How can I force fsck on next boot on Red Hat Enterprise Linux?

By default, the fsck utility is run on every boot. For ext3 filesystems, the boot scripts do a quick check of the filesystem journal to see whether the file system is clean. If that initial check passes, no further checking is performed. Otherwise, the user is prompted to run a full fsck check.

You can force an automatic full check by changing the check interval using tune2fs (-c and/or -i). For example:

# tune2fs -c 1 /dev/hda2

The above command would tell the init scripts to run fsck on hda2 at every boot.

# tune2fs -i 1d /dev/hda2

The above command would tell the init scripts to run fsck on hda2 after 1 day.

If you only want to run fsck on the next boot, please execute the following as the root user:

# cd /
# touch forcefsck

Touching the file "forcefsck" in the / directory forces a full file system check on the next boot only. The file is deleted automatically after fsck finishes.

Note: For systems with large disks, fsck on boot may take a long time to run depending on system speed and disk sizes.
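Before forcing checks, you can inspect a filesystem's current check settings with tune2fs -l. A minimal sketch, using a scratch image file so no root access or real device is needed (on a real system you would point tune2fs at the device, e.g. /dev/hda2; the /tmp path and image size here are arbitrary):

```shell
# Create a small scratch ext4 image; mke2fs -F works on plain files without root.
dd if=/dev/zero of=/tmp/fsck-demo.img bs=1M count=8 status=none
mke2fs -q -F -t ext4 /tmp/fsck-demo.img

# Force a full check on every mount, as in the example above.
tune2fs -c 1 /tmp/fsck-demo.img

# Inspect the resulting settings.
tune2fs -l /tmp/fsck-demo.img | grep -E 'Maximum mount count|Mount count'
```

The same tune2fs -l invocation on a real device shows when the next automatic check is due.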

How to reload sysctl.conf variables on Linux

The sysctl command is used to modify Linux kernel variables at runtime. The variables are read from and written to /proc/sys/ using procfs.


Read variable from command line

Type the following command
$ sysctl kernel.ostype
Sample outputs:

kernel.ostype = Linux
To see all variables pass the -a option:
$ sysctl -a
$ sysctl -a | grep kernel
$ sysctl -a | more
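Each sysctl name maps directly onto a path under /proc/sys (dots become slashes), so the same variable can be read straight from procfs without the sysctl tool:

```shell
# kernel.ostype corresponds to /proc/sys/kernel/ostype
cat /proc/sys/kernel/ostype    # same value as `sysctl -n kernel.ostype`
```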

Write variable from command line

The syntax is:
# sysctl -w variable=value
To enable packet forwarding for IPv4, enter:
# sysctl -w net.ipv4.ip_forward=1
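Writing a variable requires root, but reading the current value back to verify the change does not. A quick check via the procfs path (net.ipv4.ip_forward maps to /proc/sys/net/ipv4/ip_forward):

```shell
# Prints 0 (forwarding disabled) or 1 (enabled); no root needed to read.
cat /proc/sys/net/ipv4/ip_forward
```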

Reload settings from all system configuration files

Type the following command to reload settings from config files without rebooting the box:
# sysctl --system
The settings are read from all of the following system configuration files:

  1. /run/sysctl.d/*.conf
  2. /etc/sysctl.d/*.conf
  3. /usr/local/lib/sysctl.d/*.conf
  4. /usr/lib/sysctl.d/*.conf
  5. /lib/sysctl.d/*.conf
  6. /etc/sysctl.conf
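For example, a hypothetical drop-in file under /etc/sysctl.d/ could hold settings persistently (the 99- prefix is just a convention so the file sorts last; the file name and values here are illustrative):

```
# /etc/sysctl.d/99-custom.conf  (hypothetical example file)
net.ipv4.ip_forward = 1
vm.swappiness = 10
```

After creating or editing such a file, run sysctl --system to apply it without rebooting.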

Persistent configuration

You need to edit the /etc/sysctl.conf file for setting system variables:
# vi /etc/sysctl.conf
Modify or add variables in the file, then save and close it. To load the sysctl settings from the specified file (or from /etc/sysctl.conf if none is given), enter:
# sysctl -p



Reclaiming Docker disk space

docker pull and docker build create new Docker images. Each layer is cached and, with aufs, shared between images, which keeps disk usage down by itself, but previous versions and layers are left dangling.

We can remove untagged images by running:

docker images --no-trunc | grep '<none>' | awk '{ print $3 }' | xargs -r docker rmi
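To see what that pipeline selects, here it is run against canned docker images --no-trunc style output (the repository names and image IDs are made up for illustration):

```shell
# grep keeps only the untagged rows; awk prints the IMAGE ID column.
printf '%s\n' \
  'REPOSITORY   TAG      IMAGE ID        CREATED       SIZE' \
  'nginx        latest   sha256:aaa111   2 weeks ago   133MB' \
  '<none>       <none>   sha256:bbb222   3 weeks ago   133MB' \
| grep '<none>' | awk '{ print $3 }'
# prints: sha256:bbb222
```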

By default, docker run leaves the container on disk after it exits. This is convenient if you'd like to review the process later: look at the logs or the exit status. It also keeps the aufs filesystem changes, so you can commit the container as a new image.

This can be expensive in terms of disk space, especially during testing. Remember to use the docker run --rm flag if you don't need to inspect the container later. This flag doesn't work with background containers (-d), so you'll still be left with finished containers. Clean up dead and exited containers with:

docker ps --filter status=dead --filter status=exited -aq | xargs -r docker rm -v

docker rm does not remove the volumes created by the container. It is unclear why this is the default, but you need to use the -v flag to remove the volumes along with the container.

Docker filesystem storage and volumes

There are three main ways docker stores files:

By default, everything you save to disk inside the container is saved in the aufs layer. This doesn’t create problems if you clean up unused containers and images.
If you mount a file or directory from the host (using docker run -v /host/path:/container/path ...), the files are stored in the host filesystem, so they are easy to track and pose no problem either.
The third way is Docker volumes. These are special paths mapped to a directory under /var/lib/docker/volumes/ on the host. Many images use volumes to share files between containers (using the volumes-from option) or to persist data so it isn't lost after the process exits (the data-only container pattern).
Since it is not obvious which volumes are still referenced, it's easy to leave them on disk even after all processes have exited and all containers have been removed. The following command inspects all containers (running or not), compares them to the created volumes, and prints only the paths that are not referenced by any container:

#!/usr/bin/env bash

find '/var/lib/docker/volumes/' -mindepth 1 -maxdepth 1 -type d | grep -vFf <(
  docker ps -aq | xargs docker inspect | jq -r '.[] | .Mounts | .[] | .Name | select(.)'
)

What it does, step by step:

  1. List all created volumes.
  2. List all containers and inspect them, creating a JSON array with all the entries.
  3. Format the output with jq to get the name of every mounted volume.
  4. Exclude (grep -vFf) the mounted volumes from the list of all volumes.

You need to run this as root and have the jq utility installed.

The command doesn’t remove anything, but simply passing the results to xargs -r rm -fr does so.
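The heart of that script is a set difference via grep -vFf: keep every line of the first list that does not appear in the second. A sketch with canned data (the volume names and /tmp paths are made up):

```shell
# All volume directories found on disk (stand-in for the find output).
printf '%s\n' vol-a vol-b vol-c > /tmp/all_volumes

# Volumes still referenced by some container (stand-in for the jq output).
printf '%s\n' vol-b > /tmp/used_volumes

# Print only the unreferenced volumes.
grep -vFf /tmp/used_volumes /tmp/all_volumes
# prints: vol-a then vol-c
```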

On newer Docker versions, dangling volumes can also be listed and removed directly:

docker volume ls -qf dangling=true | xargs -r docker volume rm

Save the following script to clean up everything at once:


# remove exited containers:
docker ps --filter status=dead --filter status=exited -aq | xargs -r docker rm -v

# remove unused images:
docker images --no-trunc | grep '<none>' | awk '{ print $3 }' | xargs -r docker rmi

# remove unused volumes:
find '/var/lib/docker/volumes/' -mindepth 1 -maxdepth 1 -type d | grep -vFf <(
  docker ps -aq | xargs docker inspect | jq -r '.[] | .Mounts | .[] | .Name | select(.)'
) | xargs -r rm -fr

Useful Docker cleanup commands

# stop all exited containers:
docker ps -a | grep "Exit" | awk '{print $1}' | xargs -I{} docker stop {}

# kill them:
docker ps -a | grep "Exit" | awk '{print $1}' | xargs -I{} docker kill {}

# remove them:
docker ps -a | grep "Exit" | awk '{print $1}' | xargs -I{} docker rm {}

# remove untagged images:
docker images | grep "<none>" | awk '{print $3}' | xargs docker rmi

OpenShift CLI

OpenShift CLI: oc

The OpenShift CLI "oc" can be installed on your computer; see Get Started with the CLI. Before issuing an oc command, you must log in to the OpenShift master with oc login. It will ask for the URL of the master (when started for the first time) and for a username and password.

  • oc whoami
    If you can’t remember who you are, this tells it to you.
  • oc project $NAME
    Shows the currently active project, against which all commands are run. If a project name is added to the command, the active project is changed.
  • oc get projects
    Displays a list of projects to which the current user has access
  • oc status
    Status overview of the current project
  • oc describe $TYPE $NAME
    Detailed information about an object, e.g. oc describe pod drupal-openshift-1-m3uvx
  • oc get event
    Shows all events in the current project. Very useful for finding out what happened.
  • oc logs [-f] $PODNAME
    Show the logs of a running pod. With -f it tails the log much like tail -f.
  • oc get pod [-w]
    List of pods in the current project. With -w it shows changes in pods. Note: watch oc get pod is a helpful way to watch for pod changes
  • oc rsh $PODNAME
    Start a remote shell in the running pod to execute commands
  • oc exec $PODNAME $COMMAND
    Execute a command in the running pod. The command’s output is sent to your shell.
  • oc delete events --all
    Clean up all events. Useful if there are a lot of old events. Events are information about what is going on with the API objects and what problems exist (if any).
  • oc get builds
    List of builds. A build is a process of creating runnable images to be used on OpenShift.

  • oc logs build/$BUILDID
    Build log of the build with the given id. This corresponds to the list of builds displayed by the command above.

MySQL - Manage users and privileges

Use the instructions in this section to add users for the database and grant and revoke privileges.

Add users and privileges

When applications connect to the database using the root user, they usually have more privileges than they need. You can create a new user that applications can use to connect to the new database. In the following example, a user named demouser is created.

To create a new user, run the following command in the mysql shell:

mysql> CREATE USER 'demouser'@'localhost' IDENTIFIED BY 'demopassword';

You can verify that the user was created by running a SELECT query again (note: on MySQL 5.7 and later, the Password column is named authentication_string):

mysql> SELECT User, Host, Password FROM mysql.user;
+----------+-----------+------------------------------------------+
| User     | Host      | Password                                 |
+----------+-----------+------------------------------------------+
| root     | localhost | 2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19 |
| root     | demohost  | 2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19 |
| root     |           | 2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19 |
| demouser | localhost | 0756A562377EDF6ED3AC45A00B356AAE6D3C6BB6 |
+----------+-----------+------------------------------------------+

Grant database user privileges

Right after you create a new user, it has no privileges. The user can log in to MySQL, but can't make any database changes.

Give the user full privileges for your new database by running the following command:

GRANT ALL PRIVILEGES ON demodb.* TO demouser@localhost;

Flush the privileges to make the change take effect:

mysql> FLUSH PRIVILEGES;

To verify that the privileges were set, run the following command:

SHOW GRANTS FOR 'demouser'@'localhost';

MySQL returns the commands needed to reproduce that user's privileges if you were to rebuild the server. The USAGE ON *.* part means that the user gets no privileges on anything by default; it is complemented by the second line, which is the grant you ran for the new database.

+-----------------------------------------------------------------------------------------------------------------+
| Grants for demouser@localhost                                                                                   |
+-----------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'demouser'@'localhost' IDENTIFIED BY PASSWORD '*0756A562377EDF6ED3AC45A00B356AAE6D3C6BB6' |
| GRANT ALL PRIVILEGES ON `demodb`.* TO 'demouser'@'localhost'                                                    |
+-----------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

Revoke privileges

Sometimes you might need to revoke (remove) privileges from a user. For example, suppose that you meant to grant ALL privileges on demodb to 'demouser'@'localhost', but accidentally granted privileges on all other databases, too:

+-----------------------------------------------------------------------------------------------------------------+
| Grants for demouser@localhost                                                                                   |
+-----------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'demouser'@'localhost' IDENTIFIED BY PASSWORD '*0756A562377EDF6ED3AC45A00B356AAE6D3C6BB6' |
| GRANT ALL PRIVILEGES ON *.* TO 'demouser'@'localhost'                                                           |
+-----------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

To correct the mistake, use a REVOKE statement followed by a GRANT statement to apply the correct privileges:

mysql> REVOKE ALL ON *.* FROM demouser@localhost;
mysql> GRANT ALL PRIVILEGES ON demodb.* TO demouser@localhost;
mysql> SHOW GRANTS FOR 'demouser'@'localhost';
+-----------------------------------------------------------------------------------------------------------------+
| Grants for demouser@localhost                                                                                   |
+-----------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'demouser'@'localhost' IDENTIFIED BY PASSWORD '*0756A562377EDF6ED3AC45A00B356AAE6D3C6BB6' |
| GRANT ALL PRIVILEGES ON `demodb`.* TO 'demouser'@'localhost'                                                    |
+-----------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

Now your user has the correct privileges, and your database server is slightly more secure (granting privileges like ALL ON *.* is considered very bad practice). Read the official MySQL documentation on the available privilege choices, and grant only those privileges that are truly needed rather than ALL.
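As a sketch of that advice, a typical web application usually needs only the basic data-manipulation privileges. Reusing the demodb/demouser names from the examples above (the exact privilege list depends on your application):

```sql
REVOKE ALL PRIVILEGES ON *.* FROM 'demouser'@'localhost';
GRANT SELECT, INSERT, UPDATE, DELETE ON demodb.* TO 'demouser'@'localhost';
FLUSH PRIVILEGES;
```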