June 15

Linux: How to open multiple xterm windows while running a command in bash

My goal was to ssh to several servers at the same time. I wanted to open a new terminal for each connection and, within each terminal, run a separate bash session that launched my ssh connection. You have to launch within bash if you want to be able to keep using the terminal session after you exit your ssh session. I also wanted each terminal window to open at a different location on my screen.

The code below does the following:
1. Generates six random horizontal positions
2. Generates six random vertical positions
3. Starts each xterm at a random location, launching the command inside of a bash shell

#!/bin/bash
# Random horizontal (X) and vertical (Y) screen offsets; bash arrays are zero-indexed
data=($( shuf -i 75-1000 -n 6))
data2=($( shuf -i 75-450 -n 6))
#echo ${data[0]}
#echo ${data[1]}
xterm -geometry 150x32+${data[0]}+${data2[0]} -e bash -c 'command; bash' &
xterm -geometry 150x32+${data[1]}+${data2[1]} -e bash -c 'command; bash' &
xterm -geometry 150x32+${data[2]}+${data2[2]} -e bash -c 'command; bash' &
xterm -geometry 150x32+${data[3]}+${data2[3]} -e bash -c 'command; bash' &
xterm -geometry 150x32+${data[4]}+${data2[4]} -e bash -c 'command; bash' &
xterm -geometry 150x32+${data[5]}+${data2[5]} -e bash -c 'command; bash' &
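For the original ssh goal, a loop version keeps the script short. This is a minimal sketch; the hostnames in the servers array are placeholders for your own:

#!/bin/bash
# Hypothetical server list -- replace with your own hostnames
servers=(web1 web2 db1 db2 app1 app2)
xpos=($( shuf -i 75-1000 -n ${#servers[@]}))
ypos=($( shuf -i 75-450 -n ${#servers[@]}))
for i in "${!servers[@]}"; do
    # The trailing 'bash' keeps the terminal usable after the ssh session exits
    xterm -geometry 150x32+${xpos[$i]}+${ypos[$i]} -e bash -c "ssh ${servers[$i]}; bash" &
done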

By: nighthawk

May 16

Linux: Only two of four gluster servers are receiving write requests

I talked with some gluster admins about an issue I had where only two of the four servers were answering write requests. I had originally created my cluster with only two servers and two bricks. Even though I deleted the original settings, there was still data stored in gluster's DHT (Distributed Hash Table). We proved this theory by creating a new directory and writing files into it; the files dispersed across the servers as they were supposed to. The fix is to run a gluster volume rebalance on the affected volume. This did correct the problem.
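For reference, the rebalance is kicked off and then monitored with two commands (the volume name here is a placeholder):

gluster volume rebalance datavol1 start
gluster volume rebalance datavol1 status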

May 11

Linux: Gluster storage and replication options explained

One of the key issues I see with any storage clustering is the loss of available storage. If I have four servers and each server has a 1 TB drive, how I configure the cluster determines the amount of available storage space. It comes down to how much redundancy a person wants versus how much storage space.

Replication Examples:
4 servers
1 TB drive each server

Distributed (default) glusterfs volume:
gluster volume create test-volume transport tcp fscluster1:/exp1 fscluster2:/exp2 fscluster3:/exp1 fscluster4:/exp2

Total storage available = 4TB
Complete files are stored on one of the four servers. If you lose a server, the files that were stored on that server are now gone. All other files will remain on the servers that are still active.

Replicated gluster volume:
gluster volume create test-volume replica 4 transport tcp fscluster1:/exp1 fscluster2:/exp2 fscluster3:/exp1 fscluster4:/exp2

Total storage available = 1TB

When replicating across all of the servers, one loses a lot of available storage. In this case three-quarters of my raw space goes to redundant copies, but I have incredible redundancy. I can lose up to three servers and still have all of my data.

Distributed/Replicated gluster volume:
gluster volume create test-volume replica 2 transport tcp fscluster1:/exp1 fscluster2:/exp2 fscluster3:/exp1 fscluster4:/exp2

Total storage available = 2TB

Group1
fscluster1 and fscluster2 = 1TB
Group2
fscluster3 and fscluster4 = 1TB

In this example there are two groups of servers. The servers in a group replicate files with each other. You can lose any single server in a group and your files are still completely available. When using round-robin DNS (RRDNS), files can end up stored in either group. From a gluster client's perspective they appear to be on one hard drive. In reality any single file is on two hard drives, on the two servers in a single group. If a server in a group goes down, the remaining server will write new files to the other group.

Stripes
The next two types are striped volumes. Neither offers any file redundancy, as the files are striped across the servers' drives. Striped volumes would theoretically be faster due to having more “arms” doing the work. I have not tested this yet (see the throughput sketch at the end of this section).

Striped gluster volume:
gluster volume create test-volume stripe 4 transport tcp fscluster1:/exp1 fscluster2:/exp2 fscluster3:/exp1 fscluster4:/exp2
Total storage available = 4TB (striping adds no redundant copies, so no capacity is lost)

A single file is spread across all four servers.  All four servers need to be running to access any of the files.

Distributed striped glusterfs volume:
gluster volume create test-volume stripe 2 transport tcp fscluster1:/exp1 fscluster2:/exp2 fscluster3:/exp1 fscluster4:/exp2
Total storage available = 4TB

Group1
fscluster1 and fscluster2 = 2TB
Group2
fscluster3 and fscluster4 = 2TB

There are two groups.  A single file will be stored in one of the two groups, but striped across both of the servers in that group.  Both servers in a group must be running to access the files stored on that group.  When using round-robin DNS (RRDNS), files can end up stored in either group. From a gluster client's perspective they appear to be on one hard drive. In reality any single file is spread across two hard drives, on the two servers in a single group. If a server in a group goes down, new files will be written to the other group.
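Regarding the untested speed claim above, a rough way to compare volume types is to time a large sequential write on a mounted volume. This is a quick sanity check, not a proper benchmark, and the mount point below is a placeholder:

dd if=/dev/zero of=/mnt/gv01/testfile bs=1M count=1024 conv=fsync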

May 10

Linux: Setting up a basic gluster storage cluster

Set up a CentOS server
Update the server
yum -y update

Assign IP
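One way to set a static address on CentOS 7 is with nmcli; the connection name and addresses below are placeholder assumptions (check yours with nmcli con show):

nmcli con mod eth0 ipv4.addresses 192.168.1.11/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1 ipv4.method manual
nmcli con up eth0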

set the hostname
hostnamectl set-hostname fscluster1 --static

Disable SELinux
nano /etc/sysconfig/selinux
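In that file, set the SELINUX line to disabled (it takes effect after the reboot below). A one-liner that makes the same edit (--follow-symlinks because that path is a symlink to /etc/selinux/config):

sed -i --follow-symlinks 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux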

Disable the Firewall
systemctl status firewalld
systemctl disable firewalld
systemctl stop firewalld
iptables -F
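If you would rather leave firewalld running, opening gluster's ports is an alternative: the management ports plus one port per brick. Verify the ranges against your gluster version's docs before relying on them:

firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49156/tcp
firewall-cmd --reload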

Reboot

DNS
At this point DNS is important. Make certain your servers are in your local DNS server so they can talk to each other by both short name and FQDN. Use round-robin DNS (RRDNS) to allow the client to talk with all servers in the cluster.

Setup repos and get needed software
cd Downloads/
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-6.noarch.rpm
rpm -ivh epel-release-7-6.noarch.rpm
yum update -y

Install gluster
yum install glusterfs-server
service glusterd start
service glusterd status

Add glusterd to startup
systemctl enable glusterd

Create the directory structure for mounting the new drives (I am doing two new drives)
mkdir -p /data/brick1/gv0
mkdir -p /data/brick2/gv0

Setup File System
fdisk -l
fdisk /dev/sdb
  n      (new partition)
  p      (primary)
  1      (partition number)
  Enter  (accept default first sector)
  Enter  (accept default last sector)
  w      (write changes and exit)
mkfs.btrfs /dev/sdb1
fdisk /dev/sdc
  n      (new partition)
  p      (primary)
  1      (partition number)
  Enter  (accept default first sector)
  Enter  (accept default last sector)
  w      (write changes and exit)
mkfs.btrfs /dev/sdc1
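If you prefer a non-interactive version of the partitioning step, parted can create the same single primary partition on each disk:

parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%
parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 100%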

Add new drives to fstab for automounting
echo "/dev/sdb1 /data/brick1 btrfs defaults 0 0" >> /etc/fstab
echo "/dev/sdc1 /data/brick2 btrfs defaults 0 0" >> /etc/fstab
mount -a
df -h
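Device names like /dev/sdb1 can change across reboots when disks are added or removed; a slightly safer variant is to mount by UUID. The UUID below is a placeholder for what blkid prints:

blkid /dev/sdb1
echo "UUID=<uuid-from-blkid> /data/brick1 btrfs defaults 0 0" >> /etc/fstab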

After repeating this setup on at least one more server, gluster can be configured.
From fscluster1
gluster peer probe fscluster2.domainname.local
gluster peer status
gluster pool list

Create volume
gluster volume create datavol1 replica 2 transport tcp fscluster1.domainname.local:/data/brick1/gv0 fscluster2.domainname.local:/data/brick1/gv0 force
gluster volume start datavol1
gluster volume info

Create a glusterfs mount point to connect to on each server
mkdir -p /mnt/gv01
Add mount to fstab (be sure to specify the appropriate server name)
echo "fscluster1.domainname.local:datavol1 /mnt/gv01 glusterfs defaults 0 0" >> /etc/fstab
mount -a
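One caveat here: the glusterfs mount needs the network and glusterd up before it can mount at boot, so adding the _netdev option to the fstab entry is a common safeguard:

echo "fscluster1.domainname.local:datavol1 /mnt/gv01 glusterfs defaults,_netdev 0 0" >> /etc/fstab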

Install glusterfs on a client (I am using Ubuntu 16.04)
sudo apt-get install glusterfs-client

Create a mount point on your client
sudo mkdir /mnt/fscluster

Mount the gluster file system
sudo mount -t glusterfs fscluster.domainname.local:/datavol1 /mnt/fscluster

Other information:

Possible Connection Issues
gluster peer status – Will show what servers are part of the cluster and their connection status
gluster volume info – Will show what servers and bricks are connected in the cluster. It will also display the cluster's brick configuration.
A four server Distributed/Replicated Example:
____________________________________
Volume Name: volume1
Type: Distributed-Replicate
Volume ID: 6689132a-6221-4fee-ba6d-892b7d0fc7f5
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: fscluster1.domainname.local:/data/brick1/gv0
Brick2: fscluster2.domainname.local:/data/brick1/gv0
Brick3: fscluster3.domainname.local:/data/brick1/gv0
Brick4: fscluster4.domainname.local:/data/brick1/gv0
Options Reconfigured:
performance.readdir-ahead: on
____________________________________
netstat -apt | grep glusterfsd
The output of this command should show each of your connected servers.  If not, restart glusterfsd on each server that is not showing up and try the command again.

To delete a data volume
gluster volume stop datavol1
gluster volume delete datavol1

If you have problems creating a volume, sometimes it is due to a connection issue. Troubleshoot with telnet (24007 is glusterd's management port).
yum install telnet
telnet (name or ip of server) 24007

If a data volume is not created correctly and remnants remain, you can use the following commands to clean things up.
(Possible error when recreating a volume: Staging failed on fscluster2.domainname.local. Error: /data/brick1/gv0 is already part of a volume)
Do the following:
setfattr -x trusted.glusterfs.volume-id /$pathtobrick
setfattr -x trusted.gfid /$pathtobrick
rm -rf /$pathtobrick/.glusterfs
Example:
________________________________________
setfattr -x trusted.glusterfs.volume-id /data/brick1/gv0
setfattr -x trusted.gfid /data/brick1/gv0
rm -rf /data/brick1/gv0/.glusterfs
________________________________________

By: nighthawk

April 21

Linux: Local domain addresses are not resolving in Ubuntu

In Ubuntu 14.04, local domain addresses are not resolving properly, even though the nslookup and dig commands work. That is because those tools query DNS directly instead of going through the system resolver. To fix the issue you need to modify /etc/nsswitch.conf.

Replace:
hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4

with:
hosts: files dns
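To verify the fix, use getent, which resolves names through nsswitch.conf the same way most applications do (the hostname is a placeholder for one of your local addresses):

getent hosts myserver.local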

By doep, dragouf, and nighthawk
