[Archived] Shared Storage (Ceph) Jewel

Use the :latest-mimic tag rather than :latest, e.g.:

docker run -d --net=host \
    --privileged=true \
    --pid=host \
    -v /etc/ceph:/etc/ceph \
    -v /var/lib/ceph/:/var/lib/ceph/ \
    -v /dev/:/dev/ \
    -e OSD_FORCE_ZAP=1 \
    -e OSD_DEVICE=/dev/sdb \
    -e OSD_TYPE=disk \
    --name="ceph-osd" \
    --restart=always \
    ceph/daemon:latest-mimic osd_ceph_disk
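
Once the container is up, a quick way to confirm the OSD actually joined the cluster (a sketch only; it assumes your mon container is named ceph-mon, which may differ in your setup):

# Watch the OSD container's log for activation errors
docker logs -f ceph-osd

# Ask the cluster for its status from inside the mon container (name assumed)
docker exec ceph-mon ceph -s
docker exec ceph-mon ceph osd tree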

I haven't worked out yet whether the mimic version is needed for all the other containers too; still testing.

Refs: "ceph-disk command not found" (ceph/ceph-container issue #1324) and "daemon/osd: Migrate ceph-disk to ceph-volume" (ceph/ceph-container pull request #1325 by dsavineau), both on GitHub.

After Ceph Luminous, the "ceph-disk" command was replaced by the "ceph-volume" command. I successfully got past this step by using the ceph/daemon:latest-luminous image, but now I'm stuck at the OSD step:

2019-06-19 07:32:03.898799 7f9faef02d80 -1 osd.1 0 log_to_monitors {default=true}
2019-06-19 07:32:05.015088 7f9f968c8700 -1 osd.1 0 waiting for initial osdmap

Has anyone been able to make it work on Ubuntu 18.04?
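
For reference, the ceph-volume workflow that replaced ceph-disk looks roughly like this when run by hand (a sketch only; it assumes the ceph packages are installed and /etc/ceph holds a valid ceph.conf plus bootstrap-osd keyring, and it will wipe /dev/sdb):

# Show any OSDs ceph-volume already knows about on this host
ceph-volume lvm list

# Prepare and activate a new (bluestore) OSD on the raw device in one step
ceph-volume lvm create --data /dev/sdb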

Hi,

my OSD container is not running, and it fails with the same error. I don't know why it's getting a different id.

Can you help me?

My OSD container log:

docker run --net=host \
    --privileged=true \
    --pid=host \
    -v /etc/ceph/:/etc/ceph/ \
    -v /var/lib/ceph/:/var/lib/ceph/ \
    -v /dev/:/dev/ \
    -e OSD_FORCE_ZAP=1 \
    -e OSD_DEVICE=/dev/sdb \
    -e OSD_TYPE=disk \
    --name="ceph-osd" \
    --restart=always \
    ceph/daemon:latest-luminous osd_ceph_disk
2020-04-01 21:10:57 /opt/ceph-container/bin/entrypoint.sh: static: does not generate config
HEALTH_WARN 1 MDSs report slow metadata IOs; noscrub,nodeep-scrub flag(s) set; Reduced data availability: 16 pgs inactive; OSD count 0 < osd_pool_default_size 3
2020-04-01 21:10:57 /opt/ceph-container/bin/entrypoint.sh: INFO: It looks like /dev/sdb is an OSD
2020-04-01 21:10:57 /opt/ceph-container/bin/entrypoint.sh: You can use the zap_device scenario on the appropriate device to zap it
2020-04-01 21:10:57 /opt/ceph-container/bin/entrypoint.sh: Moving on, trying to activate the OSD now.
main_activate: path = /dev/sdb1
get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
command: Running command: /usr/sbin/blkid -o udev -p /dev/sdb1
command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.KtFUpi with options noatime,inode64
command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.KtFUpi
command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.KtFUpi
activate: Cluster uuid is d50c7ccf-5a64-4726-bdd0-5e9a6084298d
command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
mount_activate: Failed to activate
unmount: Unmounting /var/lib/ceph/tmp/mnt.KtFUpi
command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.KtFUpi
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5736, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5687, in main
    args.func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3777, in main_activate
    reactivate=args.reactivate,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3540, in mount_activate
    (osd_id, cluster) = activate(path, activate_key_template, init)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3687, in activate
    ' with fsid %s' % ceph_fsid)
ceph_disk.main.Error: Error: No cluster conf found in /etc/ceph with fsid d50c7ccf-5a64-4726-bdd0-5e9a6084298d
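
That last line is the real failure: the fsid recorded on the /dev/sdb1 data partition doesn't match any cluster conf in /etc/ceph inside the container. A quick way to compare the two (a sketch; /mnt is just an arbitrary temporary mountpoint):

# fsid the mounted ceph.conf declares
grep fsid /etc/ceph/ceph.conf

# fsid stamped on the ceph-disk-prepared OSD partition
mount -o ro /dev/sdb1 /mnt && cat /mnt/ceph_fsid && umount /mnt

If the two differ, the disk still carries data from a previous cluster and needs zapping (or the matching ceph.conf needs to be put in place) before the OSD will activate.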

Hi There!

When I run the following commands as root on Ubuntu 18.04.4 LTS, I get these errors:

$ chcon -Rt svirt_sandbox_file_t /etc/ceph/; chcon -Rt svirt_sandbox_file_t /var/lib/ceph/
chcon: can't apply partial context to unlabeled file '/etc/ceph'
chcon: can't apply partial context to unlabeled file '/var/lib/ceph'

Any idea whether I need to be concerned about these errors (and if so, how to fix them) before I continue with your guide?

Thanks!
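
For what it's worth, chcon only does something useful on an SELinux-labeled filesystem, and stock Ubuntu ships AppArmor rather than SELinux, which is why the files show up as unlabeled; on such a host the chcon step can most likely be skipped. A quick way to check what you're running (a sketch, harmless either way):

# Prints the SELinux mode only if the tooling is even installed (it usually isn't on Ubuntu)
command -v getenforce && getenforce

# Show the security context on the files; a plain "?" means no SELinux label at all
ls -Z /etc/ceph /var/lib/ceph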

I am also trying to do the same on Ubuntu 18.04 (and have tried the 20.04 beta as well). I'm struggling with the following errors in the early config steps:

cd@node1:~$ sudo apt install ceph-base
Reading package lists... Done
Building dependency tree
Reading state information... Done
ceph-base is already the newest version (15.2.1-0ubuntu1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 

cd@node1:~$ sudo chcon -Rt svirt_sandbox_file_t /var/lib/ceph
[sudo] password for cd:
chcon: can't apply partial context to unlabeled file 'bootstrap-rbd-mirror'
chcon: can't apply partial context to unlabeled file 'bootstrap-mds'
chcon: can't apply partial context to unlabeled file 'posted'
chcon: can't apply partial context to unlabeled file 'crash'
chcon: can't apply partial context to unlabeled file 'bootstrap-osd'
chcon: can't apply partial context to unlabeled file 'mds'
chcon: can't apply partial context to unlabeled file 'bootstrap-rbd'
chcon: can't apply partial context to unlabeled file 'bootstrap-mgr'
chcon: can't apply partial context to unlabeled file 'bootstrap-rgw'
chcon: can't apply partial context to unlabeled file 'tmp'
chcon: can't apply partial context to unlabeled file '/var/lib/ceph'
cd@node1:~$

cd@node1:~$ sudo chcon -Rt svirt_sandbox_file_t /etc/ceph/
chcon: can't apply partial context to unlabeled file 'rbdmap'
chcon: can't apply partial context to unlabeled file '/etc/ceph/'
cd@node1:~$

cd@node1:~$ sudo docker run -d --net=host --restart always -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -e MON_IP=192.168.254.10 -e CEPH_PUBLIC_NETWORK=192.168.254.0/24 --name="ceph-mon" ceph/daemon mon
bd1244fc9acf92c5bdff9050a3297aeddefb56165817968fba364906d2687675
docker: Error response from daemon: error while creating mount source path '/var/lib/ceph': mkdir /var/lib/ceph: read-only file system.
cd@node1:~$

cd@node1:~$ sudo ls -lisa /var/lib/ceph/
total 44
65541 4 drwxr-x--- 11 ceph ceph 4096 Apr 22 17:42 .
14226 4 drwxr-xr-x 41 root root 4096 Apr 22 17:42 ..
65549 4 drwxr-xr-x  2 ceph ceph 4096 Apr 17 21:08 bootstrap-mds
65550 4 drwxr-xr-x  2 ceph ceph 4096 Apr 17 21:08 bootstrap-mgr
65551 4 drwxr-xr-x  2 ceph ceph 4096 Apr 17 21:08 bootstrap-osd
65552 4 drwxr-xr-x  2 ceph ceph 4096 Apr 17 21:08 bootstrap-rbd
65553 4 drwxr-xr-x  2 ceph ceph 4096 Apr 17 21:08 bootstrap-rbd-mirror
65554 4 drwxr-xr-x  2 ceph ceph 4096 Apr 17 21:08 bootstrap-rgw
65555 4 drwxr-xr-x  3 ceph ceph 4096 Apr 22 17:42 crash
65559 4 drwxr-xr-x  2 ceph ceph 4096 Apr 17 21:08 mds
65557 4 drwxr-xr-x  2 ceph ceph 4096 Apr 17 21:08 tmp
cd@node1:~$

Any idea?
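
One thing worth ruling out for the "read-only file system" error (an assumption on my part, not something confirmed in this thread): Docker installed from the Ubuntu snap runs confined and can fail to create or use bind-mount source paths under /var/lib, whereas the deb-packaged docker.io or docker-ce daemon does not have that restriction. A quick check:

# Is the daemon the snap-packaged one?
snap list docker 2>/dev/null

# Where does the daemon think its root directory is?
docker info | grep -i 'root dir'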

Hey, sorry that this recipe is giving you so much grief! I know it’s based on an old version of ceph, and certain elements don’t work properly anymore. There’s an effort underway to update the recipe to the latest ceph versions - you’re welcome to join in the conversation at http://chat.funkypenguin.co.nz, in the #dev channel. Else watch this space for an update in the next few days :wink:

D

@funkypenguin Any update on when the updated recipe will be ready? Thank you for all your hard work :slight_smile:

@waynehaffenden I've been working through the instructions Ceph has published for using cephadm to install and configure a new cluster with Docker containers, but have run into an issue. I filed this bug (Bug #45672: Unable to add additional hosts to cluster using cephadm - Orchestrator - Ceph) a little while ago (I had to wait until they activated my account on their bug tracker).
