How to Deploy a DNS Server on CentOS 7

In this step-by-step tutorial I’ll try to be as fast as I can, so you don’t waste your entire life reading just to get something working.

I won’t explain every detail or what a DNS server is (but I promise I can write a post on that topic for those who don’t know).

This post is meant to get you doing what you need to do, without the blah blah blah.

The initial setup in my case is:

Hostname: ns.zlab.com

IPADDR=192.168.193.2
NETMASK=255.255.255.0
GATEWAY=192.168.193.1
DNS1=8.8.8.8
DNS2=8.8.4.4

First of all, do a clean CentOS 7 installation and update it with:

yum update -y

Reboot

shutdown -r now

With that done, you will need to install the bind packages and configure a few files.

yum install bind bind-utils -y

Then, create the zones directory (where you’ll place the files for your DNS zones ;-)).

mkdir /etc/named/zones

You will need /etc/named.conf to look like the final version shown below; the items listed next are what you’ll need to adapt to fit your needs.

vi /etc/named.conf

Copy the code below, adapt it to your needs, paste, and save.

Note what was modified:

The server IP address: 192.168.193.2
The allow-query option: set to any
The forwarders block with Google’s DNS information (you can use your preferred DNS):

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };

And the line that includes the file with the zone definitions themselves:

include "/etc/named/named.conf.local";

The final version should look like this:

//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// See the BIND Administrator's Reference Manual (ARM) for details about the
// configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html

options {
        listen-on port 53 { 127.0.0.1; 192.168.193.2; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { any; };

        /*
         - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
         - If you are building a RECURSIVE (caching) DNS server, you need to enable
           recursion.
         - If your recursive DNS server has a public IP address, you MUST enable access
           control to limit queries to your legitimate users. Failing to do so will
           cause your server to become part of large scale DNS amplification
           attacks. Implementing BCP38 within your network would greatly
           reduce such attack surface
        */
        recursion yes;

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };


        dnssec-enable yes;
        dnssec-validation yes;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.root.key";

        managed-keys-directory "/var/named/dynamic";

        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
include "/etc/named/named.conf.local";

After that, let’s create the /etc/named/named.conf.local file.

vi /etc/named/named.conf.local

This must be the content (please adapt it to your needs; the zone names and zone file paths are what you’ll adjust):

zone "zlab.com" {
    type master;
    file "/etc/named/zones/zlab.com";
};

zone "193.168.192.in-addr.arpa" {
    type master;
    file "/etc/named/zones/db.192.168.193";  # 192.168.193.0/24 subnet
};

Then, create the zone files, in my case zlab.com and db.192.168.193.

vi /etc/named/zones/zlab.com

The content should be like this (change according to your needs):

;
; BIND data file for zlab.com
;
$TTL    604800
@       IN      SOA     ns.zlab.com. admin.zlab.com. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      ns.zlab.com.
ns.zlab.com.              IN      A       192.168.193.2

Note that this line

ns.zlab.com.              IN      A       192.168.193.2

is the record for your DNS server itself. You’ll need to add the rest of your infrastructure using the same scheme, for example:

ns.zlab.com.              IN      A       192.168.193.2
ldap.zlab.com.            IN      A       192.168.193.10
w10.zlab.com.             IN      A       192.168.193.20

Save the file and edit the reverse zone file, in my case the db.192.168.193 file.

vi /etc/named/zones/db.192.168.193

The content should be like this (change according to your needs):

;
; BIND reverse data file for 192.168.193.0/24
;
$TTL    604800
@       IN      SOA     ns.zlab.com. admin.zlab.com. (
                              2         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      ns.zlab.com.

; also list other computers
2       IN      PTR     ns.zlab.com.           ; 192.168.193.2

Note that this line

2       IN      PTR     ns.zlab.com.           ; 192.168.193.2

is the record for your DNS server itself. You’ll need to add the rest of your infrastructure using the same scheme, for example:

2       IN      PTR     ns.zlab.com.           ; 192.168.193.2
10      IN      PTR     ldap.zlab.com.         ; 192.168.193.10
20      IN      PTR     w10.zlab.com.          ; 192.168.193.20

Save the file.
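
Before restarting anything, it’s worth validating the configuration and both zone files with the checking tools that ship with the bind package. A minimal sanity check, using the paths from this tutorial:

named-checkconf /etc/named.conf
named-checkzone zlab.com /etc/named/zones/zlab.com
named-checkzone 193.168.192.in-addr.arpa /etc/named/zones/db.192.168.193

If something is wrong, these utilities point at the offending line instead of leaving you to dig through the logs after a failed restart.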

Change the server’s own DNS to 127.0.0.1, so your network config (the ifcfg file for your interface under /etc/sysconfig/network-scripts/) should look like this:

IPADDR=192.168.193.2
NETMASK=255.255.255.0
GATEWAY=192.168.193.1
DNS1=127.0.0.1

Restart the bind daemon.

systemctl restart named

Enable it in the system, so it comes back up after a reboot:

systemctl enable named

Set firewall rules:

firewall-cmd --permanent --zone=public --add-port=53/tcp
firewall-cmd --permanent --zone=public --add-port=53/udp
firewall-cmd --reload
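
At this point you can do a quick sanity check from the server itself with dig, which came with the bind-utils package installed earlier. The first query exercises the forward zone, the second the reverse zone:

dig @127.0.0.1 ns.zlab.com
dig @127.0.0.1 -x 192.168.193.2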

Now for the best part: making it work on your infrastructure!

It’s really simple now!

Wherever you would normally configure Google’s or your Internet provider’s DNS, set your DNS server’s IP address instead.

Example for Linux CentOS machines:

IPADDR=192.168.193.X
NETMASK=255.255.255.0
GATEWAY=192.168.193.1
DNS1=192.168.193.2

You’ll need to adapt this according to your client’s operating system.
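
As a quick test from any client, you can query the new server directly. A small example, assuming ldap.zlab.com is one of the records you added to your zone:

nslookup ldap.zlab.com 192.168.193.2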

Hope it helps you.

How to Deploy a Kubernetes Cluster with 3 Worker Nodes on CentOS 7.7 including Dashboard and External Access

Hello Guys,

For this Lab Environment we’ll need 4 Virtual Machines:

1 CentOS VM with 8 GB RAM and 15 GB disk, IP 192.168.132.100, for the k8s-master;
1 CentOS VM with 4 GB RAM and 15 GB disk, IP 192.168.132.101, for the k8s-worker1;
1 CentOS VM with 4 GB RAM and 15 GB disk, IP 192.168.132.102, for the k8s-worker2;
1 CentOS VM with 4 GB RAM and 15 GB disk, IP 192.168.132.103, for the k8s-worker3;

With the OS installation complete, let’s start installing things.

First of all, let’s adjust the IP addresses and hostnames according to the specs above. It’s also a really good idea to update /etc/hosts on every node to include these lines:

192.168.132.100 k8s-master
192.168.132.101 k8s-worker1
192.168.132.102 k8s-worker2
192.168.132.103 k8s-worker3

To adjust the hostname, run the command below on each virtual machine, replacing the name accordingly:

For the Master

# hostnamectl set-hostname 'k8s-master'

For the Worker1

# hostnamectl set-hostname 'k8s-worker1'

For the Worker2

# hostnamectl set-hostname 'k8s-worker2'

For the Worker3

# hostnamectl set-hostname 'k8s-worker3'

Great. ON ALL NODES, we need to run the following steps:

Update the system

# yum update -y

Install yum-utils (which provides yum-config-manager) and add the repo to install Docker

# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Configure iptables for Kubernetes

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
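
If you want to confirm the settings took effect, note that these net.bridge keys only exist once the br_netfilter kernel module is loaded (the master section below loads it explicitly), so a check would look like this, with both values printed as 1:

# modprobe br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables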

Add the Kubernetes repo needed to find the kubelet, kubeadm, and kubectl packages

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Set SELinux to Permissive Mode

# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Turn off swap

# swapoff -a
# sed -i '/swap/d' /etc/fstab
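
You can confirm swap is really off; the Swap line should show all zeros:

# free -h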

Install Kubernetes and Docker

# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes docker-ce docker-ce-cli containerd.io

Start and Enable Docker

# systemctl enable --now docker

Start and Enable Kubernetes

# systemctl enable --now kubelet

Let’s disable the firewall so we can get things working first, and fix security properly later (in a future post)

# systemctl disable firewalld --now

———————————————————————————————————————————-

OK. With those steps done on every virtual machine, we’ll jump to the k8s-master and run the following steps:

# yum -y install wget lsof
# modprobe br_netfilter
# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Restart

# shutdown -r now

Start the Cluster

# kubeadm init

After it comes up, copy the kubeadm join line for later use on the worker nodes. (Pay attention: you must copy the line from your own output, because every build generates its own token. The example below is the kubeadm join line my installation generated, and it will not work for you.)

# kubeadm join 192.168.132.100:6443 --token 8u9v7h.1nfot2drqnqw8mps \
    --discovery-token-ca-cert-hash sha256:8624e49e1ce94e912ac7c081deabd50196f8526c9a597e0142414204939ff510

Then, still on the master, set up kubectl for your user:

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

Let’s install the pod network using Weave Net

# export kubever=$(kubectl version | base64 | tr -d '\n')
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

Let’s check the nodes and pods of our installation. Note that it sometimes takes a while for everything to become Ready and Running; you can keep re-running these commands until everything looks good.

# kubectl get nodes
# kubectl get pods --all-namespaces
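
If you’d rather literally watch them converge, the standard watch utility (or kubectl’s own -w flag) re-runs the query for you:

# watch kubectl get pods --all-namespaces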

———————————————————————————————————————————-

Let’s Install the K8s Dashboard

# vi kubernetes-dashboard-deployment.yaml

Add this content:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kube-system

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kube-system

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-rc7
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kube-system
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Then apply the YAML file

# kubectl apply -f kubernetes-dashboard-deployment.yaml

And check again to see if the Dashboard is up and running (note again that the pod might take a while to come up; just keep watching the command until it does)

# kubectl get pods --all-namespaces

Make sure we have correctly exposed the external port (NodePort) 30001, so the Dashboard can be reached from outside the cluster.

# kubectl -n kube-system get services

———————————————————————————————————————————-

Adjust SSL Certificates

# kubectl delete secret kubernetes-dashboard-certs -n kube-system
# mkdir $HOME/certs
# cd $HOME/certs
# openssl genrsa -out dashboard.key 2048
# openssl rsa -in dashboard.key -out dashboard.key
# openssl req -sha256 -new -key dashboard.key -out dashboard.csr -subj '/CN=localhost'
# openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
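
If you want to double-check the self-signed certificate before rebooting, openssl can print its subject and validity window:

# openssl x509 -in dashboard.crt -noout -subject -dates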

Reboot the System

# shutdown -r now

———————————————————————————————————————————-

Now, let’s create a file named adminuser.yaml (so we can log in to the Dashboard) and save it with this content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

Then apply

# kubectl apply -f adminuser.yaml

And create a file named permissions.yaml to give permissions to the user we’ve created, with this content:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Then apply

# kubectl apply -f permissions.yaml

With that done, it’s time to log in to the Dashboard! But first, we need to collect the token we’ll use for access. Run the command below and copy the part that says token, but only the token itself.

# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Copy only the token, which will look something like this (again, this is just the example my installation generated; each installation generates a unique one, be aware of that):

eyJhbGciOiJSUzI1NiIsImtpZCI6IlJvVEotbUdDUndjbXBxdUJvbU41ekxYOE9TWTdhM1NFR1dSc3g5Ul9Dbk0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTlzajl3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4MjgyYWI5NS03YTViLTQzOTItYWQwNy0yZTY1MTc2YTgxNjIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.NKWvADh7osIFng0xgNpf2C2FBCsijXZr4HOgHTFVZS_TWMLT3wPt6rXhHnnDTu6HXUWd0era9GgXouemliTJKA-HrE8R7ArW88m1jAckR4xVquTLf0Vr1na_nmBVuMTiW9W55b6cr-hZuWuyI4F--91N43-6DzdMXpesHaur6UFojS5dUPoHybIr7HUkLWq8InV7fON06r6zWHCLQVftTYtyaAqbvlJnqcRNlnbyobsRWryjUXQRZrC8Fu1QwCe9aLVmcBByG1Tp3Ao8sA_W-ue8ISpat6_shLJa5zO9vpJEYOzfkdDFhAy6MigXl4b1T4J2q1bhpuIX82dmLVhboA

———————————————————————————————————————————-

With the token copied, open your browser and go to https://192.168.132.100:30001, choose Token, paste the token your installation created, and you’re inside the Dashboard.

And now the last part: with the master ready to receive the nodes, let’s join them!

Log into every worker node and run the command you copied from the kubeadm init output during the master’s setup:

# kubeadm join 192.168.132.100:6443 --token 8u9v7h.1nfot2drqnqw8mps \
    --discovery-token-ca-cert-hash sha256:8624e49e1ce94e912ac7c081deabd50196f8526c9a597e0142414204939ff510

While you run the command on the nodes, open the “Nodes” section of the Dashboard to watch the workers being joined to the cluster.
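
Alternatively, you can follow along from the master; each worker should appear and move to the Ready state after a minute or so:

# kubectl get nodes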

That’s it! The next steps will be deploying an application on this infrastructure and making it reliable, but that is the subject for a future post. Hope you all enjoy, comment, and share.

Any comments or improvements, just let me know.

How to Build a NIM Server on AIX 6.1 from Scratch :: Part 1

Hello Fellas!

Here is a good how-to on building a NIM server from scratch under AIX 6.1. (The procedure is still the same on version 7.1, anyway.)

Well, for this environment I dedicated one VG to NIM, as a best practice. So, let’s take a look at our VGs:

# lspv
hdisk0          00047ff1211f84d2                    rootvg          active      
hdisk1          00047ff12252331b                    nimvg           active  

Recalling our OS version:

# oslevel -s
6100-08-02-1316

Checking for the NIM packages already installed by default:

# lslpp -l | grep nim
bos.sysmgt.nim.client     6.1.8.15  COMMITTED  Network Install Manager -
bos.sysmgt.nim.client     6.1.8.15  COMMITTED  Network Install Manager -

So, with our AIX 6.1 media mounted on cd0, let’s locate the NIM SPOT and NIM MASTER packages we need to install:

# installp -Ld /dev/cd0 | grep nim
X11.Dt:X11.Dt.helpmin:6.1.2.0::I:T:::::N:AIX CDE Minimum Help Files ::::0:0846:
X11.msg.DE_DE:X11.msg.DE_DE.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - German (UTF)::::0::
X11.msg.EN_US:X11.msg.EN_US.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - U.S. English (UTF)::::0::
X11.msg.FR_FR:X11.msg.FR_FR.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - French (UTF)::::0::
X11.msg.IT_IT:X11.msg.IT_IT.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - Italian (UTF)::::0::
X11.msg.JA_JP:X11.msg.JA_JP.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - Japanese (UTF)::::0::
X11.msg.Ja_JP:X11.msg.Ja_JP.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - Japanese::::0::
X11.msg.de_DE:X11.msg.de_DE.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - German::::0::
X11.msg.en_US:X11.msg.en_US.Dt.helpmin:6.1.0.0::I:T:::::N:AIX CDE Minimum Help Files - U.S. English::::0:0747:
X11.msg.fr_FR:X11.msg.fr_FR.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - French::::0::
X11.msg.it_IT:X11.msg.it_IT.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - Italian::::0::
X11.msg.ja_JP:X11.msg.ja_JP.Dt.helpmin:6.1.4.0::I:T:::::N:AIX CDE Minimum Help Files - Japanese IBM-eucJP::::0::
bos.sysmgt:bos.sysmgt.nim.client:6.1.8.15::I:C:::::N:Network Install Manager - Client Tools ::::0:1316:
bos.sysmgt:bos.sysmgt.nim.master:6.1.8.15::I:T:::::N:Network Install Manager - Master Tools ::::0:1316:
bos.sysmgt:bos.sysmgt.nim.spot:6.1.8.15::I:T:::::N:Network Install Manager - SPOT ::::0:1316:

Let’s first install the nim.spot filesets:

# installp -agXd /dev/cd0 bos.sysmgt.nim.spot
+-----------------------------------------------------------------------------+
Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
Filesets listed in this section passed pre-installation verification
and will be installed.

Selected Filesets
-----------------
bos.sysmgt.nim.spot 6.1.8.15 # Network Install Manager - SPOT

<< End of Success Section >>

+-----------------------------------------------------------------------------+
BUILDDATE Verification ...
+-----------------------------------------------------------------------------+
Verifying build dates...done
FILESET STATISTICS
------------------
1 Selected to be installed, of which:
1 Passed pre-installation verification
----
1 Total to be installed

+-----------------------------------------------------------------------------+
Installing Software...
+-----------------------------------------------------------------------------+

installp: APPLYING software for:
bos.sysmgt.nim.spot 6.1.8.15

. . . . . << Copyright notice for bos.sysmgt >> . . . . . . .
Licensed Materials - Property of IBM

5765G6200
Copyright International Business Machines Corp. 1993, 2013.

All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for bos.sysmgt >>. . . .

Finished processing all filesets. (Total time: 16 secs).

+-----------------------------------------------------------------------------+
Summaries:
+-----------------------------------------------------------------------------+

Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
bos.sysmgt.nim.spot 6.1.8.15 USR APPLY SUCCESS

Then, we can install the nim.master filesets:

# installp -agXd /dev/cd0 bos.sysmgt.nim.master
+-----------------------------------------------------------------------------+
Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
Filesets listed in this section passed pre-installation verification
and will be installed.

Selected Filesets
-----------------
bos.sysmgt.nim.master 6.1.8.15 # Network Install Manager - Ma...

<< End of Success Section >>

+-----------------------------------------------------------------------------+
BUILDDATE Verification ...
+-----------------------------------------------------------------------------+
Verifying build dates...done
FILESET STATISTICS
------------------
1 Selected to be installed, of which:
1 Passed pre-installation verification
----
1 Total to be installed

+-----------------------------------------------------------------------------+
Installing Software...
+-----------------------------------------------------------------------------+

installp: APPLYING software for:
bos.sysmgt.nim.master 6.1.8.15

. . . . . << Copyright notice for bos.sysmgt >> . . . . . . .
Licensed Materials - Property of IBM

5765G6200
Copyright International Business Machines Corp. 1993, 2013.

All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for bos.sysmgt >>. . . .

Successfully updated the Kernel Authorization Table.
Successfully updated the Kernel Role Table.
Successfully updated the Kernel Command Table.
Successfully updated the Kernel Device Table.
Successfully updated the Kernel Object Domain Table.
Successfully updated the Kernel Domains Table.
Finished processing all filesets. (Total time: 42 secs).

+-----------------------------------------------------------------------------+
Summaries:
+-----------------------------------------------------------------------------+

Installation Summary
--------------------
Name Level Part Event Result
-------------------------------------------------------------------------------
bos.sysmgt.nim.master 6.1.8.15 USR APPLY SUCCESS

After these two operations, we have the filesets needed for the NIM server configuration:

# lslpp -l | grep nim
bos.sysmgt.nim.client     6.1.8.15  COMMITTED  Network Install Manager -
bos.sysmgt.nim.master     6.1.8.15  COMMITTED  Network Install Manager -
bos.sysmgt.nim.spot       6.1.8.15  COMMITTED  Network Install Manager - SPOT
bos.sysmgt.nim.client     6.1.8.15  COMMITTED  Network Install Manager -

Starting the setup (using smit, which is easier):


# smit nim

Image 1.

  1. Select: Configure the NIM Environment

Image 2.

  1. Select: Configure a Basic NIM Environment (Easy Startup)

Image 3.

  1. Primary Network Interface for the NIM Master: choose the network card used for the NIM network connection;
  2. Input device for installation images: in our case, I chose cd0, the AIX 6.1 ISO mounted from the VIOS;
  3. LPP SOURCE Name: I chose AIX61DISK1LPP to indicate that this is Disk 1 of the AIX 6.1 installation media;
  4. Filesystem SIZE (MB): 4000 = 4 GB (the size of the ISO file);
  5. VOLUME GROUP for new filesystem: nimvg (the VG created for holding the files);

Image 4.

  1. SPOT Name: I chose AIX61DISK1SPOT to identify what the SPOT is about;
  2. Filesystem SIZE (MB): 650, as space for working files during installation/processing. The minimum is 500 MB.
  3. VOLUME GROUP for new filesystem: again nimvg, the VG defined for this use.

Image 5.

  1. Remove all newly added NIM definitions and filesystems if any part of this operation fails?: Yes; in case of failure, this rolls everything back to where it started.

Image 6.

After all settings are entered, hit Enter to start the resource creation. (Be careful: unmount /dev/cd0 beforehand to avoid mounting problems when using this drive as the source for the LPP.)

Image 7.

(After some time…) Installation finished.
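
To confirm the basic environment really got created, you can list the defined NIM objects; you should see the master itself plus the AIX61DISK1LPP lpp_source and the AIX61DISK1SPOT SPOT created above:

# lsnim
# lsnim -l AIX61DISK1LPP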

So, in the next part we will talk about creating new LPP/SPOT resources. It’ll be available soon, stay tuned!

Script to show Total, Free and Used Memory on AIX

Just Copy and Paste:

(
# Total memory in GB: prtconf -m prints "Memory Size: NNNN MB", so field 3 / 1024
memory=`prtconf -m | awk 'BEGIN {FS=" "} {print $3/1024}'`
# In-use memory in GB: svmon -G reports 4 KB pages, so /256 -> MB and /1024 -> GB
usedmem=`svmon -G | grep memory | awk 'BEGIN {FS=" "} {print $3/256/1024}'`
# Free = total - used
freemem=`echo $memory-$usedmem | bc -l`
clear
echo
echo "Memory Results:"
echo "----------------------"
echo
echo "Avai Mem: $memory GB"
echo "Free Mem: $freemem GB"
echo "Used Mem: $usedmem GB"
echo
echo)

Result:

Memory Results:
----------------------

Avai Mem: 2 GB
Free Mem: 0.69649 GB
Used Mem: 1.30351 GB

Have something different to Share? Let me know!! Joao Bosco Cortez Filho

Awesome Command to show top 15 processes using memory on AIX

Need to know which processes are at the top of memory usage on AIX? Here it is:

Command:

# svmon -Pt15 | perl -e 'while(<>){print if($.==2||$&&&!$s++);$.=0 if(/^-+$/)}'

Output:

# svmon -Pt15 | perl -e 'while(<>){print if($.==2||$&&&!$s++);$.=0 if(/^-+$/)}'
-------------------------------------------------------------------------------
     Pid Command          Inuse      Pin     Pgsp  Virtual 64-bit Mthrd  16MB
 5636288 rmcd             75559    69009      384    80108      N     Y     N
 6094908 java             43698    14846     4153    51997      N     Y     N
 3801272 java             41497    14821     5718    51391      N     Y     N
 3866846 cimserver        24446    14811     4472    33101      N     Y     N
 5112018 cimprovagt       24249    14775      572    29118      N     Y     N
 6160420 cimlistener      22622    14775     1396    28326      N     Y     N
 6553824 rpc.mountd       21649    14774      401    26376      N     Y     N
 7471188 sshd             21581    14772      384    26269      N     N     N
 3473552 tier1slp         21564    14772     1881    27692      N     N     N
 4456702 IBM.MgmtDomai    21550    14781      440    26383      N     Y     N
 3342532 rpc.statd        21541    14775      457    26329      N     Y     N
 3408040 clcomd           21510    14775      509    26356      N     Y     N
 4915204 IBM.DRMd         21507    14793      672    26583      N     Y     N
 5374150 topasrec         21478    14772      396    26186      N     N     N
 6750398 ksh              21472    14772      384    26096      N     N     N

Any Questions? Please Write Me! Joao Bosco Cortez Filho