Quick vsftpd install and configuration

4 12 2016

This is a quick way to install vsftpd and get a basic configuration that lets the system's local users log into the FTP server:
apt-get install vsftpd
vi /etc/vsftpd.conf
Uncomment the following lines:
local_enable=YES
write_enable=YES

Restart the service and enjoy!
/etc/init.d/vsftpd restart
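The same edit can be scripted instead of done in vi. A minimal sketch, shown here against a scratch copy of the config so it is safe to dry-run; point CONF at the real /etc/vsftpd.conf to apply it:

```shell
# Enable the two directives with sed, whether they are commented out or not.
# Demo uses a scratch copy; set CONF=/etc/vsftpd.conf to do it for real.
CONF=$(mktemp)
printf '#local_enable=YES\n#write_enable=YES\n' > "$CONF"

sed -i -e 's/^#*local_enable=.*/local_enable=YES/' \
       -e 's/^#*write_enable=.*/write_enable=YES/' "$CONF"

grep '_enable=' "$CONF"
# local_enable=YES
# write_enable=YES
```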





USB sound card 0d8c:013c C-Media Electronics, Inc. CM108 not working

20 10 2016

If you have the following sound card:

ID 0d8c:013c C-Media Electronics, Inc. CM108 Audio Controller

And you don’t want to use any audio device except this card, the solution is to blacklist all the modules listed here except snd_usb_audio:

cat /proc/asound/modules
0 snd_bcm2835
1 snd_usb_audio
2 snd_hda_intel

Create the following file listing the other, non-USB sound modules:

/etc/modprobe.d/blacklist.conf
blacklist snd_hda_intel
blacklist snd_bcm2835

And restart. After a lot of googling, this was the only solution that worked for me.
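Instead of writing the file by hand, it can also be generated from /proc/asound/modules; a small sketch, shown against the sample listing above (redirect the awk output to /etc/modprobe.d/blacklist.conf on the real system):

```shell
# Print a "blacklist" line for every sound module except snd_usb_audio.
# The printf stands in for: cat /proc/asound/modules
printf '0 snd_bcm2835\n1 snd_usb_audio\n2 snd_hda_intel\n' |
awk '$2 != "snd_usb_audio" { print "blacklist", $2 }'
# blacklist snd_bcm2835
# blacklist snd_hda_intel
```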

References:
http://raspberrypi.stackexchange.com/questions/40831/how-do-i-configure-my-sound-for-jasper-on-raspbian-jessie
http://alsa.opensrc.org/MultipleCards





Testing virtual interfaces inside a multi-host VxLAN, one-to-one (unicast) or one-to-many (multicast)

12 09 2016

– First of all, enable ip forward:

echo 1 > /proc/sys/net/ipv4/ip_forward

– Set up the VxLAN:

For unicast, define the local and remote IPs:
ip link add vxlan1 type vxlan id 42 remote 10.1.1.1 local 10.1.1.2 dev eth0 dstport 4789

For multicast, define the IP for the multicast group:
ip link add vxlan1 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789

– Bring up the VxLAN:

ip link set up dev vxlan1

– Create the bridge and bring it up:

ip link add name br0 type bridge
ip link set br0 up

– Create the virtual ethernet interface, a veth pair, and bring up one side:

ip link add veth0 type veth peer name veth1
ip link set veth0 up

– Create the namespace and include the other side of the veth pair:

ip netns add blue
ip link set veth1 netns blue

– Assign an IP address to veth1 and bring it up; do the same for lo:

ip netns exec blue ifconfig veth1 192.168.1.1/24 up
ip netns exec blue ip link set dev lo up

– Include the VxLAN and the veth interface into the bridge:

ip link set vxlan1 master br0
ip link set veth0 master br0

– If you choose the unicast way, repeat this process on the other host, changing the “remote” and “local” IPs in the VxLAN setup step and the veth1 IP address (in this example 192.168.1.3). Unicast only allows a one-to-one configuration.

– If you choose multicast, repeat the same process on each host, changing only the veth1 IP address (in this example 192.168.1.3). Any number of hosts can join the multicast group without problems.
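All the per-host steps above can be wrapped in a single helper. A sketch for the unicast case that only prints the command sequence, so it can be reviewed first and then piped to sh as root; the interface and namespace names come from this walkthrough, and the three IPs are arguments:

```shell
# Print the unicast VxLAN setup for one host.
# $1 = local underlay IP, $2 = remote underlay IP, $3 = veth1 address
vxlan_host_cmds() {
  cat <<EOF
ip link add vxlan1 type vxlan id 42 remote $2 local $1 dev eth0 dstport 4789
ip link set up dev vxlan1
ip link add name br0 type bridge
ip link set br0 up
ip link add veth0 type veth peer name veth1
ip link set veth0 up
ip netns add blue
ip link set veth1 netns blue
ip netns exec blue ifconfig veth1 $3/24 up
ip netns exec blue ip link set dev lo up
ip link set vxlan1 master br0
ip link set veth0 master br0
EOF
}

vxlan_host_cmds 10.1.1.2 10.1.1.1 192.168.1.1   # this host
vxlan_host_cmds 10.1.1.1 10.1.1.2 192.168.1.3   # the other host, IPs swapped
```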

– Test connectivity between the different hosts:

Host1:
ip netns exec blue ping 192.168.1.3
PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.
64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.313 ms

ip netns exec blue traceroute 192.168.1.3
traceroute to 192.168.1.3 (192.168.1.3), 30 hops max, 60 byte packets
1 192.168.1.3 (192.168.1.3) 0.329 ms 0.273 ms 0.253 ms

Host2:
ip netns exec green ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.234 ms

ip netns exec green traceroute 192.168.1.1
traceroute to 192.168.1.1 (192.168.1.1), 30 hops max, 60 byte packets
1 192.168.1.1 (192.168.1.1) 0.256 ms 0.230 ms 0.209 ms

– View VxLAN information:

bridge fdb show dev vxlan1
00:00:00:00:00:00 dst 10.1.1.1 via eth0 self permanent
36:33:16:6a:4f:8b dst 10.1.1.1 self
36:33:16:6a:4f:8b vlan 0 master br0
b2:1f:24:b9:1a:39 vlan 0 master br0 permanent





Testing virtual interfaces inside an Open vSwitch distributed across two hosts with a GRE / VxLAN tunnel

12 09 2016

– First of all, enable ip forward:

echo 1 > /proc/sys/net/ipv4/ip_forward

– Install Open vSwitch:

apt-get install openvswitch-switch

– Create the virtual ethernet interface, a veth pair, and bring one side up:

ip link add veth0 type veth peer name veth1
ip link set veth0 up

– Create the bridge inside the OVS (Open vSwitch):

ovs-vsctl add-br br0

– Include the veth0 into the bridge:

ovs-vsctl add-port br0 veth0

– Create the tunnel from this host to the remote one, changing “remote_ip” to the remote IP address. Choose one option, GRE or VxLAN:

Using GRE:
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=10.10.10.1

Using VxLAN:
ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.10.10.1

– Create the namespace and include the other side of the veth interface:

ip netns add green
ip link set veth1 netns green

– Assign an IP address to the interface inside the namespace, and bring it up, the same for lo:

ip netns exec green ifconfig veth1 192.168.1.1/24 up
ip netns exec green ip link set dev lo up

– Repeat this process on the remote host, assigning the correct “remote_ip” and a different veth1 IP (in this example 192.168.1.2).
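As with the VxLAN post, the whole per-host sequence can be collected into one helper that only prints the commands for review (pipe to sh as root to apply). The tunnel type, remote IP and veth1 address are parameters; everything else uses the names from this walkthrough, and the 10.10.10.2 address used for the first host below is an assumption, since the post only gives the remote side's IP:

```shell
# Print the OVS setup for one host.
# $1 = tunnel type (gre or vxlan), $2 = remote underlay IP, $3 = veth1 address
ovs_host_cmds() {
  cat <<EOF
ip link add veth0 type veth peer name veth1
ip link set veth0 up
ovs-vsctl add-br br0
ovs-vsctl add-port br0 veth0
ovs-vsctl add-port br0 ${1}0 -- set interface ${1}0 type=$1 options:remote_ip=$2
ip netns add green
ip link set veth1 netns green
ip netns exec green ifconfig veth1 $3/24 up
ip netns exec green ip link set dev lo up
EOF
}

ovs_host_cmds gre 10.10.10.1 192.168.1.1    # this host
ovs_host_cmds gre 10.10.10.2 192.168.1.2    # remote host; 10.10.10.2 assumed
```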

– Test connectivity between the different hosts:

Host 2:
ip netns exec green ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.597 ms

ip netns exec green traceroute 192.168.1.1
traceroute to 192.168.1.1 (192.168.1.1), 30 hops max, 60 byte packets
1 * 192.168.1.1 (192.168.1.1) 0.635 ms *

Host 1:
ip netns exec green ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.408 ms

ip netns exec green traceroute 192.168.1.2
traceroute to 192.168.1.2 (192.168.1.2), 30 hops max, 60 byte packets
1 192.168.1.2 (192.168.1.2) 1.287 ms 1.071 ms 1.056 ms

– The command “ovs-vsctl show” reports the status of the OVS:

GRE:
ce14f19a-978d-47f9-83a9-f00f2f1655f4
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "gre0"
Interface "gre0"
type: gre
options: {remote_ip="192.168.1.1"}
Port "veth0"
Interface "veth0"
ovs_version: "2.3.0"

VxLAN:
ce14f19a-978d-47f9-83a9-f00f2f1655f4
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "vxlan0"
Interface "vxlan0"
type: vxlan
options: {remote_ip="192.168.1.1"}
Port "veth0"
Interface "veth0"
ovs_version: "2.3.0"





Testing virtual interface inside a namespace

12 09 2016

One virtual interface (veth0/1) in one namespace (blue) with internet connectivity

– First of all, enable ip forward:

echo 1 > /proc/sys/net/ipv4/ip_forward

– Create the virtual ethernet interface, a veth pair, and bring one side up:

ip link add veth0 type veth peer name veth1
ip link set veth0 up

– Create the network namespace, called blue, in which the other side of the veth is going to reside:

ip netns add blue

– Put the corresponding veth side, veth1, into the namespace. Take into account that the other side, veth0, resides in the default system namespace:

ip link set veth1 netns blue

– Configure the veth1 with an IP address and bring it up. The command is executed inside the namespace:

ip netns exec blue ifconfig veth1 10.1.1.1/24 up

– Bring up the lo interface too, to avoid strange problems:

ip netns exec blue ip link set dev lo up

– Create the bridge, called br0, and bring it up:

ip link add name br0 type bridge
ip link set br0 up

– Assign an IP address to the bridge interface to gain layer-3 behaviour; without it, the bridge only works at layer 2:

ip addr add 10.1.1.254/24 dev br0

– Include the veth0, which is outside the namespace, into the bridge:

ip link set veth0 master br0

– Add a default route for the namespace inside it:

ip netns exec blue ip route add default via 10.1.1.254

– Add the iptables rules to allow NAT on the host system:

iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

– Test 1. Ping and traceroute from the host to the namespace:

ping -c1 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.063 ms

traceroute 10.1.1.1
traceroute to 10.1.1.1 (10.1.1.1), 30 hops max, 60 byte packets
1 10.1.1.1 (10.1.1.1) 0.051 ms 0.012 ms 0.010 ms

– Test 2. Ping and traceroute from the namespace to the bridge:

ip netns exec blue ping -c1 10.1.1.254
PING 10.1.1.254 (10.1.1.254) 56(84) bytes of data.
64 bytes from 10.1.1.254: icmp_seq=1 ttl=64 time=0.038 ms

ip netns exec blue traceroute 10.1.1.254
traceroute to 10.1.1.254 (10.1.1.254), 30 hops max, 60 byte packets
1 10.1.1.254 (10.1.1.254) 0.059 ms 0.013 ms 0.009 ms

– Test 3. Ping and traceroute from the namespace to the internet:

ip netns exec blue ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=0.838 ms

ip netns exec blue traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.1.1.254 (10.1.1.254) 0.050 ms 0.012 ms 0.008 ms
...
9 google-public-dns-a.google.com (8.8.8.8) 0.884 ms 0.701 ms 0.681 ms

Two virtual interfaces (veth0/1 and veth10/11), each in its own namespace (blue and green), sharing the same subnet, with internet connectivity

– In addition to the steps done above…

– Create the virtual ethernet interface, and bring it up:

ip link add veth10 type veth peer name veth11
ip link set veth10 up

– Create the network namespace, and include the veth11 interface into it:

ip netns add green
ip link set veth11 netns green

– Include the veth10 into the bridge:

ip link set veth10 master br0

– Configure the veth11 with an IP address, bring it up, the same for lo, and add the default route to the bridge:

ip netns exec green ifconfig veth11 10.1.1.11/24 up
ip netns exec green ip link set dev lo up
ip netns exec green ip route add default via 10.1.1.254
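Since blue and green follow the same recipe, the namespace part can be factored into a helper. A sketch that only prints the commands (pipe to sh as root to apply); the br0 bridge, its 10.1.1.254 address and the NAT rules from the first part are assumed to exist already:

```shell
# Print the commands that attach one namespace to the existing br0 bridge.
# $1 = namespace, $2 = host-side veth, $3 = ns-side veth, $4 = address
ns_cmds() {
  cat <<EOF
ip link add $2 type veth peer name $3
ip link set $2 up
ip netns add $1
ip link set $3 netns $1
ip netns exec $1 ifconfig $3 $4/24 up
ip netns exec $1 ip link set dev lo up
ip link set $2 master br0
ip netns exec $1 ip route add default via 10.1.1.254
EOF
}

ns_cmds blue veth0 veth1 10.1.1.1
ns_cmds green veth10 veth11 10.1.1.11
```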

– Test 4. Ping and traceroute from the blue namespace to the green:

ip netns exec blue ping -c1 10.1.1.11
PING 10.1.1.11 (10.1.1.11) 56(84) bytes of data.
64 bytes from 10.1.1.11: icmp_seq=1 ttl=64 time=0.059 ms

ip netns exec blue traceroute 10.1.1.11
traceroute to 10.1.1.11 (10.1.1.11), 30 hops max, 60 byte packets
1 10.1.1.11 (10.1.1.11) 0.055 ms 0.015 ms 0.015 ms

– Test 5. Ping and traceroute from the green namespace to the blue:

ip netns exec green ping -c1 10.1.1.1
PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data.
64 bytes from 10.1.1.1: icmp_seq=1 ttl=64 time=0.068 ms

ip netns exec green traceroute 10.1.1.1
traceroute to 10.1.1.1 (10.1.1.1), 30 hops max, 60 byte packets
1 10.1.1.1 (10.1.1.1) 0.060 ms 0.010 ms 0.008 ms

– Test 6. Ping and traceroute from the green namespace to the internet:

ip netns exec green ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=0.804 ms

ip netns exec green traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.1.1.254 (10.1.1.254) 0.073 ms 0.010 ms 0.007 ms
...
9 google-public-dns-a.google.com (8.8.8.8) 0.663 ms 0.726 ms 0.680 ms





Install VMWare Tools in Debian – 2016

1 09 2016

Since support for the official and proprietary VMware Tools shipped by VMware has ended in favour of the Open VM Tools, running the “vmware-install.pl” script is a thing of the past.

The official document pointing to this change is “VMware support for Open VM Tools (2073803)”; the following lines from it are worth highlighting:

– VMware recommends using OVT redistributed by operating system vendors.
– VMware fully supports virtual machines that include OVT redistributed by operating system vendors, which is done in collaboration with the OS vendor and OS communities. However, the operating system release must be published as certified by the specific VMware product in the online VMware Compatibility Guide.
– VMware provides assistance to operating system vendors and communities with the integration of open-vm-tools with OS releases.
– VMware fully supports virtual appliances that include OVT , which is done in collaboration with the virtual appliance vendor.
– VMware does not recommend removing OVT redistributed by operating system vendors.

So now, installing Open VM Tools is as easy as installing any other package:

# apt-get install open-vm-tools

https://packages.debian.org/search?keywords=open-vm-tools

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2073803





Boot Debian system with EFI Stub kernel

31 08 2016

I will try to explain the process of installing a Debian system with UEFI and using the EFI stub capability to load the kernel and the RAM disk directly from the EFI system firmware, leaving aside “grub-efi” and shaving some seconds off boot time.

– Be sure that UEFI is enabled in your system booting firmware.

– Install Debian following the normal way until the partitioning step.

– UEFI requires a boot partition with the following characteristics:

* Around "500MB" of space
* Bootable flag on
* Partition type "EFI System Partition"

You can select automatic partitioning, and the Debian installer will create it for you, or do the partitioning by hand using the values indicated above.

Don’t create this partition under software RAID or LVM; the UEFI firmware won’t read files placed there.

– Finish the installation and reboot. If it went correctly, you have a polished running system, booted with grub-efi, with the UEFI partition mounted under /boot/efi:

/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=utf8,shortname=mixed,errors=remount-ro)

– Now it’s time to change the boot process to an EFI stub. Copy the kernel and the RAM disk into the UEFI partition:

# cp /vmlinuz /initrd.img /boot/efi/EFI/debian/

– Look for your root filesystem UUID (in this example it is sda2, the partition where Linux is installed):

# blkid /dev/sda2
/dev/sda2: UUID="955548bd-9c77-4893-8633-3a5e7966dfc9" TYPE="ext4" PARTUUID="4cd59271-18a2-4d00-a65a-a106ee030a1a"

– And create an entry in the UEFI firmware for the Linux EFI stub (replace the UUID with your particular value):

# efibootmgr -c -g -L "Debian (EFI stub)" -l '\EFI\debian\vmlinuz' -u "root=UUID=955548bd-9c77-4893-8633-3a5e7966dfc9 ro quiet rootfstype=ext4 add_efi_memmap initrd=\\EFI\\debian\\initrd.img"
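The blkid lookup and the efibootmgr call can be glued together so the UUID never gets pasted by hand. A sketch that builds and prints the same command for review (run its output as root); /dev/sda2 is the example root partition from above:

```shell
# Build the "create EFI stub entry" command with the UUID taken from blkid.
make_efistub_cmd() {
  kargs="root=UUID=$1 ro quiet rootfstype=ext4 add_efi_memmap initrd=\\\\EFI\\\\debian\\\\initrd.img"
  printf 'efibootmgr -c -g -L "Debian (EFI stub)" -l %s -u "%s"\n' \
    "'\\EFI\\debian\\vmlinuz'" "$kargs"
}

make_efistub_cmd "$(blkid -s UUID -o value /dev/sda2 2>/dev/null)"
```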

– Check the UEFI information; the new entry is at the bottom:

# efibootmgr -v
BootCurrent: 0004
BootOrder: 0005,0004,0000,0001,0002,0003
Boot0000* EFI Virtual disk (0.0) ACPI(a0341d0,0)PCI(15,0)PCI(0,0)SCSI(0,0)
Boot0001* EFI VMware Virtual IDE CDROM Drive (IDE 1:0) ACPI(a0341d0,0)PCI(7,1)ATAPI(1,0,0)
Boot0002* EFI Network ACPI(a0341d0,0)PCI(16,0)PCI(0,0)MAC(MAC(005056948a0a,1)
Boot0003* EFI Internal Shell (Unsupported option) MM(b,e1a3000,e42ffff)FvFile(c57ad6b7-0515-40a8-9d21-551652854e37)
Boot0004* debian HD(1,800,ee000,4f3b579c-10cb-44ca-b845-475b2409eaf7)File(\EFI\debian\grubx64.efi)
Boot0005* Debian (EFI stub) HD(1,800,ee000,4f3b579c-10cb-44ca-b845-475b2409eaf7)File(\EFI\debian\vmlinuz)r.o.o.t.=.U.U.I.D.=.9.5.5.5.4.8.b.d.-.9.c.7.7.-.4.8.9.3.-.8.6.3.3.-.3.a.5.e.7.9.6.6.d.f.c.9. .r.o. .q.u.i.e.t. .r.o.o.t.f.s.t.y.p.e.=.e.x.t.4. .a.d.d._.e.f.i._.m.e.m.m.a.p. .i.n.i.t.r.d.=.\.E.F.I.\.d.e.b.i.a.n.\.i.n.i.t.r.d...i.m.g.
Boot0007* Debian HD(1,800,ee000,4f3b579c-10cb-44ca-b845-475b2409eaf7)File(\EFI\debian\grubx64.efi)

– Look closely at the reported information: BootOrder now lists the new entry first, so on the next boot the system will use it:

BootCurrent: 0004
BootOrder: 0005,0004,0000,0001,0002,0003

– Now, reboot and check the BootCurrent again:

BootCurrent: 0005
BootOrder: 0005,0004,0000,0001,0002,0003

– Basically, it’s done. But to update the kernel and the RAM disk automatically when an upgrade is installed or removed, create the following files and make them executable:

# cat > /etc/kernel/postinst.d/zz-update-efistub << EOF
#!/bin/sh
echo "Updating EFI boot files..."
cp /vmlinuz /initrd.img /boot/efi/EFI/debian/
EOF

# chmod +x /etc/kernel/postinst.d/zz-update-efistub
# cp /etc/kernel/postinst.d/zz-update-efistub /etc/kernel/postrm.d/zz-update-efistub

As a note, it is possible to replace this script with systemd units. Look at the Arch Linux wiki for more info.

– That’s all.

 

 

– But if you have some problems:

– If you need to revert the change, for example to put Boot0004 first again, execute:

# efibootmgr -o 0004,0005,0000,0001,0002,0003

– If you lost the Debian entry created at installation time, recreate it with:

# efibootmgr -c -g -L "Debian" -l '\EFI\debian\grubx64.efi'

– If you have problems with EFI variables, make sure that “efivar” is installed and the “efivars” kernel module is loaded (modprobe efivars).

 

More info:
https://wiki.debian.org/EFIStub
https://www.happyassassin.net/2014/01/25/uefi-boot-how-does-that-actually-work-then/
https://wiki.archlinux.org/index.php/EFISTUB
http://wiki.bitbinary.com/index.php/Debian_Wheezy_EFI_Stub