I installed ZFS (0.6.5) on CentOS 7 and created a zpool. It works fine, apart from the fact that my datasets disappear on reboot.
I have tried to debug this issue using various online resources and blogs, but couldn't get the desired result.
After a reboot, when I run zfs list I get "no datasets available", and zpool list gives "no pools available".
After a lot of research online, I was able to make it work by importing the cache file manually with zpool import -c cachefile, but I still had to run zpool set cachefile=/etc/zfs/zpool.cache on the pool before the reboot so that it could be imported again afterwards.
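For reference, a minimal sketch of that manual workaround, assuming the default cache file path and the pool name zfsPool taken from the zpool import output further down:

# Before the reboot: record the pool's configuration in the default cache file
zpool set cachefile=/etc/zfs/zpool.cache zfsPool

# After the reboot: import every pool listed in the cache file and remount the datasets
zpool import -c /etc/zfs/zpool.cache -a
zfs mount -a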
This is what systemctl status zfs-import-cache looks like:
zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; static)
Active: inactive (dead)
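For context, a few systemd commands that can be used to check why the unit stays inactive and whether anything pulls it in at boot (inspection steps only, not a confirmed fix):

# List the ZFS-related units and their enablement state
systemctl list-unit-files | grep zfs

# Show what zfs.target pulls in at boot
systemctl list-dependencies zfs.target

# Start the import by hand to verify the unit itself works
systemctl start zfs-import-cache.service
systemctl status zfs-import-cache.service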
cat /etc/sysconfig/zfs
# ZoL userland configuration.
# Run `zfs mount -a` during system start?
ZFS_MOUNT='yes'
# Run `zfs unmount -a` during system stop?
ZFS_UNMOUNT='yes'
# Run `zfs share -a` during system start?
# nb: The shareiscsi, sharenfs, and sharesmb dataset properties.
ZFS_SHARE='yes'
# Run `zfs unshare -a` during system stop?
ZFS_UNSHARE='yes'
# Specify specific path(s) to look for device nodes and/or links for the
# pool import(s). See zpool(8) for more information about this variable.
# It supersedes the old USE_DISK_BY_ID which indicated that it would only
# try '/dev/disk/by-id'.
# The old variable will still work in the code, but is deprecated.
#ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
# Should the datasets be mounted verbosely?
# A mount counter will be used when mounting if set to 'yes'.
VERBOSE_MOUNT='no'
# Should we allow overlay mounts?
# This is standard in Linux, but not ZFS which comes from Solaris where this
# is not allowed).
DO_OVERLAY_MOUNTS='no'
# Any additional option to the 'zfs mount' command line?
# Include '-o' for each option wanted.
MOUNT_EXTRA_OPTIONS=""
# Build kernel modules with the --enable-debug switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG='no'
# Build kernel modules with the --enable-debug-dmu-tx switch?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_ENABLE_DEBUG_DMU_TX='no'
# Keep debugging symbols in kernel modules?
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_DKMS_DISABLE_STRIP='no'
# Wait for this many seconds in the initrd pre_mountroot?
# This delays startup and should be '0' on most systems.
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='0'
# Wait for this many seconds in the initrd mountroot?
# This delays startup and should be '0' on most systems. This might help on
# systems which have their ZFS root on a USB disk that takes just a little
# longer to be available
# Only applicable for Debian GNU/Linux {dkms,initramfs}.
ZFS_INITRD_POST_MODPROBE_SLEEP='0'
# List of additional datasets to mount after the root dataset is mounted?
#
# The init script will use the mountpoint specified in the 'mountpoint'
# property value in the dataset to determine where it should be mounted.
#
# This is a space separated list, and will be mounted in the order specified,
# so if one filesystem depends on a previous mountpoint, make sure to put
# them in the right order.
#
# It is not necessary to add filesystems below the root fs here. It is
# taken care of by the initrd script automatically. These are only for
# additional filesystems needed. Such as /opt, /usr/local which is not
# located under the root fs.
# Example: If root FS is 'rpool/ROOT/rootfs', this would make sense.
#ZFS_INITRD_ADDITIONAL_DATASETS="rpool/ROOT/usr rpool/ROOT/var"
# List of pools that should NOT be imported at boot?
# This is a space separated list.
#ZFS_POOL_EXCEPTIONS="test2"
# Optional arguments for the ZFS Event Daemon (ZED).
# See zed(8) for more information on available options.
#ZED_ARGS="-M"
I am not sure whether this is a known issue. If it is, is there a workaround for it? Ideally an easy way to keep the datasets across reboots without the overhead of the cache file.
zpool status -v
no pools available
And zpool import gives me this:
   pool: zfsPool
     id: 10064980395446559551
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zfsPool     ONLINE
          sda4      ONLINE
systemctl status zfs.target
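For completeness, the action line in the zpool import output above says the pool can be imported by its name or numeric identifier; a minimal sketch of doing that by hand after a reboot (names taken from the output above):

# Import by pool name, or alternatively by the numeric id
zpool import zfsPool
# zpool import 10064980395446559551

# Verify the datasets are visible again
zfs list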