tag:blogger.com,1999:blog-36344476934789824172024-03-28T10:15:01.393-07:00Ninjix's BlogThoughts, ideas, tutorials, howtos and information.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.comBlogger24125tag:blogger.com,1999:blog-3634447693478982417.post-87687865419109865082013-09-03T08:10:00.004-07:002013-09-12T11:34:29.832-07:00Port forward FQDN website requests on Active Directory domain controllersI don't like to use made-up suffixes (.local, etc.) for Active Directory domains. There are numerous problems with this, and Microsoft stopped recommending it years ago. The catch for those of us who serve sites without the old-school www hostname is that AD requires the A records of the domain's FQDN to point to the domain controllers.<br />
<br />
There are a number of ways to solve this IT headache that boil down to leveraging the servers or the network.<br />
<br />
Things like: <br />
<ul>
<li>Install IIS on the DCs - A heavy-handed approach and not recommended.</li>
<li>Perform some network trickery to intercept and forward port 80/443 </li>
<li>Use multiple DNS servers (inside, outside, etc)</li>
</ul>
The least complicated way I have found is to use the port forwarding capabilities of Windows 2008 R2. This way you don't have to twist standard network services with an additional layer of complexity. <br />
<br />
On Linux, I'd use iptables to redirect the HTTP and HTTPS ports like this:<br />
<span style="font-size: small;"><br /></span>
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">iptables -I FORWARD -p tcp -d 192.168.1.31 --dport 80 -j ACCEPT</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">iptables -I FORWARD -p tcp -d 192.168.1.31 --dport 443 -j ACCEPT </span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.1.31:80 </span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.1.31:443 </span></span><br />
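One prerequisite worth noting for the Linux approach: the DNAT rules only forward traffic if IP forwarding is enabled on the box. A minimal sketch (run as root; the sysctl key is standard Linux):

```
# Enable IPv4 forwarding for the running kernel...
sysctl -w net.ipv4.ip_forward=1
# ...and persist it across reboots.
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
```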
<br />
From the command line on Windows 2008 R2, you can do the same using the netsh CLI.<br />
<br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">netsh interface portproxy add v4tov4 listenport=80 listenaddress=192.168.1.11 connectport=80 connectaddress=192.168.1.31</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">netsh interface portproxy add v4tov4 listenport=443 listenaddress=192.168.1.11 connectport=443 connectaddress=192.168.1.31</span></span><br />
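The portproxy entries persist across reboots. To review or undo them later, the same netsh context offers show and delete verbs:

```
netsh interface portproxy show v4tov4
netsh interface portproxy delete v4tov4 listenport=80 listenaddress=192.168.1.11
netsh interface portproxy delete v4tov4 listenport=443 listenaddress=192.168.1.11
```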
<br />
<span style="font-family: inherit;">Now any browser requests using the FQDN root will be automatically forwarded through an AD controller. No extra software need be installed.</span><br />
<br />
<br />
<span style="font-family: inherit;">My thanks to Rick Wargo for sharing his example of <a href="http://www.rickwargo.com/2011/01/08/port-forwarding-port-mapping-on-windows-server-2008-r2/" target="_blank">port forwarding on Windows 2008 R2</a>.</span><br />
<br />
Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com4tag:blogger.com,1999:blog-3634447693478982417.post-8534518554123896952013-03-27T11:00:00.002-07:002013-03-27T11:00:39.654-07:00<p>I tried to execute a dladm set-linkprop command on a Nexenta RSF-1 HA cluster and received a "link busy" error. It took me a minute to remember that the Solaris family requires you to unplumb an interface before you administer its persistent properties.</p>
<p>The example below shows how to change the MTU settings using plumb and dladm commands. This allows the RSF-1 controlled interfaces and VIPs to use jumbo frames.</p>
<pre>
ifconfig ixgbe1 unplumb
ifconfig ixgbe0 unplumb
dladm set-linkprop -p mtu=9000 ixgbe0
dladm set-linkprop -p mtu=9000 ixgbe1
ifconfig ixgbe0 plumb
ifconfig ixgbe1 plumb
</pre>Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-77982903505927656252013-03-21T07:26:00.006-07:002013-03-21T07:27:21.097-07:00Dell PowerConnect Serial Console on LinuxHere's how to set up <i>minicom</i> on Linux to talk to a Dell PowerConnect.<br />
<br />
Install minicom<br />
<br />
<pre>sudo apt-get install minicom</pre>
<br />
Get your host's serial port.<br />
<br />
<pre>dmesg | grep --color ttyS</pre>
<br />
Example output:<br />
<br />
<pre>dmesg | grep --color ttyS
serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
</pre>
<br />
If you don't see anything listed, check your BIOS and make sure the serial port is enabled.<br />
<br />
Now configure minicom.<br />
<br />
<pre>minicom -s</pre>
<br />
Select "Serial port setup" and configure your settings as follows:<br />
<br />
<pre> +-----------------------------------------------------------------------+
| A - Serial Device : /dev/ttyS0 |
| B - Lockfile Location : /var/lock |
| C - Callin Program : |
| D - Callout Program : |
| E - Bps/Par/Bits : 9600 8N1 |
| F - Hardware Flow Control : No |
| G - Software Flow Control : No |
| |
| Change which setting? |
+-----------------------------------------------------------------------+
</pre>
<br />
<br />
Now you can save these either as <i>dfl</i> (the default) or as a named profile like <i>dell_powerconnect</i>.<br />
<br />
Use your Dell setup like this.<br />
<br />
<pre>minicom dell_powerconnect</pre>
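For reference, the saved profile is just a small text file; saving the settings above as <i>dell_powerconnect</i> would produce something along these lines (the file lands at ~/.minirc.dell_powerconnect for per-user saves or /etc/minicom/minirc.dell_powerconnect system-wide; the exact key list minicom writes may vary by version):

```
# Machine-generated file - use "minicom -s" to change parameters.
pu port             /dev/ttyS0
pu baudrate         9600
pu bits             8
pu parity           N
pu stopbits         1
pu rtscts           No
pu xonxoff          No
```

Newer minicom builds also accept one-shot flags, e.g. <i>minicom -D /dev/ttyS0 -b 9600</i>, if you'd rather not save a profile.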
<br />Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com3tag:blogger.com,1999:blog-3634447693478982417.post-85825421755127202042013-03-21T07:00:00.001-07:002013-03-21T07:00:24.645-07:00I have a lot of servers with zero-padded numbers in their names. Here's a quick way to issue SSH commands to all of them. The key is to use good old <i>printf</i>.<br />
<br />
<br />
<pre>for i in {1..10}; do ssh cloud-host-$(printf "%02d" $i) iscsiadm -m node -T iqn.2004-04.com.megastorage:hyper-zfs-serv:iscsi.zabbix.c4c655 -u; done
</pre>
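The padding trick is easy to sanity-check on its own. Here's a minimal sketch, with <i>echo</i> standing in for the actual ssh call (the cloud-host- prefix follows the example above):

```shell
# Zero-pad the loop counter to two digits and build each hostname.
for i in 1 5 10; do
  echo "cloud-host-$(printf "%02d" $i)"
done
# prints cloud-host-01, cloud-host-05, cloud-host-10
```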
Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-26905934417657051172013-03-21T06:27:00.002-07:002013-03-21T06:30:10.777-07:00The ccze utility is your friend for reading squid proxy logs. It's a nice colorizer and performs timestamp conversions with the -C argument.<br />
<br />
<pre>sudo tail -f /var/log/squid-deb-proxy/access.log|ccze -CA
</pre>
Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-68351568847081274902011-11-02T10:12:00.000-07:002011-11-02T10:12:43.905-07:00A Formal Introduction to The Ubuntu Orchestra Project<a href="http://blog.dustinkirkland.com/2011/08/formal-introduction-to-ubuntu-orchestra.html">A Formal Introduction to The Ubuntu Orchestra Project</a>Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-10449368596665558372011-02-24T08:56:00.000-08:002011-02-24T08:56:08.180-08:00Force Replication Between All Active Directory ServersOccasionally, I have to troubleshoot Active Directory issues between branch offices and I can never remember all of the resync arguments for the repadmin.exe command. So I'm posting it here.<br />
<br />
<pre>repadmin /syncall /A /e /P</pre><br />
This forces the executing DC to sync all naming contexts (NCs) known to it.<br />
<br />
You should see output like the following, repeated for each NC in your domain:<br />
<br />
<pre>Syncing all NC's held on ATLAS.
Syncing partition: DC=ForestDnsZones,DC=my,DC=corp,DC=com
CALLBACK MESSAGE: The following replication is in progress:
From: c2fa9a13-bc15-419c-b416-21e6da3d5760._msdcs.my.corp.com
To : cee785b6-01fe-490c-8e50-5199841a1b58._msdcs.my.corp.com
CALLBACK MESSAGE: The following replication is in progress:
From: c2fa9a13-bc15-419c-b416-21e6da3d5760._msdcs.my.corp.com
To : 62aa2e39-9c52-4eef-a789-f201350c0b02._msdcs.my.corp.com
CALLBACK MESSAGE: The following replication completed successfully:
From: c2fa9a13-bc15-419c-b416-21e6da3d5760._msdcs.my.corp.com
To : cee785b6-01fe-490c-8e50-5199841a1b58._msdcs.my.corp.com
CALLBACK MESSAGE: The following replication completed successfully:
From: c2fa9a13-bc15-419c-b416-21e6da3d5760._msdcs.my.corp.com
To : 62aa2e39-9c52-4eef-a789-f201350c0b02._msdcs.my.corp.com
CALLBACK MESSAGE: SyncAll Finished.
SyncAll terminated with no errors.
</pre>Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-78326882927990144962011-02-24T08:31:00.000-08:002011-02-24T08:39:10.996-08:00How to automatically connect Windows 7 or 2008 R2 VPN on start upDo you have a Windows 7 or 2008 R2 machine that needs to automatically connect to a VPN? Here are some instructions on configuring the Task Scheduler to do this for you.<br />
<br />
My thanks to <a href="http://social.answers.microsoft.com/Profile/en-US/?user=RpCahoon&referrer=http%3a%2f%2fsocial.answers.microsoft.com%2fForums%2fen-US%2fw7network%2fthread%2f65d5bbd3-f946-4755-9ac9-943651e0e556&rh=IxW6nLyvH89fbfNsOP81SKdX6fJpFVy0NKdDglC6lzg%3d&sp=forums">RpCahoon</a> for providing his helpful <a href="http://social.answers.microsoft.com/Forums/en-US/w7network/thread/65d5bbd3-f946-4755-9ac9-943651e0e556">post</a> on Microsoft's Social Answers site. I'm also giving Microsoft a nod for doing such a thorough job with the modern Task Scheduler.<br />
<br />
<span class="Apple-style-span" style="font-size: large;">Instructions</span><br />
<ol><li>Open <b>Task Scheduler</b><br />
Start > All Programs > Accessories > System Tools > Task Scheduler</li>
<li>Click "Create Task" in the <b>Actions</b> pane on the right</li>
<li>General Tab</li>
<ol><li>Provide a logical name for the task like "Auto VPN"</li>
<li>Switch the running task mode to <b>Run whether user is logged on or not</b></li>
<li><b>Enable</b> the <b>Run with highest privileges</b> option</li>
<li>Change the "Configure for:" drop-down to <b>Windows 7, Windows Server 2008 R2</b></li>
</ol><li>Triggers Tab</li>
<ol><li><b>Click</b> the "New..." button</li>
<li>Change "Begin the task:" to <b>At startup</b></li>
<li>(Optional) Enable "Delay task for" and set to 5 minutes. This gives the machine a chance to idle down before launching the VPN.</li>
</ol><li>Actions Tab</li>
<ol><li>Click the "New..." button</li>
<li>Enter <i>c:\windows\system32\rasdial.exe</i> in the "Program/script:" field. You can also browse to it if you don't want to type it or your default Windows install directory is different.</li>
<li><b>Type</b> the connection name in the "Add arguments" field. <i>rasdial.exe</i> requires you to wrap the connection name in quotes if it has spaces. You may also need to append the connection's <i>username</i> and <i>password</i> if they are required.</li>
</ol><li>Conditions Tab</li>
<ol><li><b>Un-check all of the options</b> on the conditions tab.</li>
</ol><li>Settings Tab</li>
<ol><li>(Optional) enable "If the task fails, restart every:" and set to an appropriate value. I set mine to 1 hour in case there is a problem on the VPN server's end. </li>
<li>(Optional) set the "Attempt to restart up to:" value to an acceptable number. My default is 72 times at a 1 hour interval. This covers a long weekend.</li>
</ol><li>Save the new task</li>
</ol>Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com294tag:blogger.com,1999:blog-3634447693478982417.post-24036379401586589772011-02-18T13:14:00.000-08:002011-02-18T13:17:28.929-08:00Get those FSYNC numbers up on your ZFS poolFor the last week, I've been trying to figure out why our 10-drive ZFS zpool has been delivering such lousy NFS performance to our Proxmox KVM cluster. <br />
<br />
Here's what pveperf was returning:<br />
<pre>pveperf /mnt/pve/kvm-images/
CPU BOGOMIPS: 76608.87
REGEX/SECOND: 896132
HD SIZE: 7977.14 GB (xxx.xxx.xxx.xxx:/volumes/vol0/kvm-images)
FSYNCS/SECOND: 23.15
DNS EXT: 58.84 ms
DNS INT: 1.50 ms (my.company.com)
</pre><br />
The zpool looked like this:<br />
<pre>zpool status vol0
pool: vol0
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
vol0 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
c0t5000C50010377B5Bd0 ONLINE 0 0 0
c0t5000C5001037C317d0 ONLINE 0 0 0
c0t5000C5001037EED7d0 ONLINE 0 0 0
c0t5000C50010381737d0 ONLINE 0 0 0
c0t5000C50010381BBBd0 ONLINE 0 0 0
c0t5000C50010382777d0 ONLINE 0 0 0
c0t5000C5001038291Fd0 ONLINE 0 0 0
c0t5000C500103870A3d0 ONLINE 0 0 0
c0t5000C500103871C3d0 ONLINE 0 0 0
c0t5000C500103924E3d0 ONLINE 0 0 0
c0t5000C500103941F7d0 ONLINE 0 0 0
cache
c0t50015179591D9AEFd0 ONLINE 0 0 0
c0t50015179591DACA1d0 ONLINE 0 0 0
c1t2d0 ONLINE 0 0 0
spares
c0t5000C50010395057d0 AVAIL
errors: No known data errors
</pre><br />
Raw write speed wasn't a problem. Tests of copying DVD ISO files were super fast over the 10G network backbone. But the performance of creating new files and folders really hurt. This was very apparent when I started using bonnie++ on the NFS shares from the Proxmox nodes. Bonnie++ zipped along until it started its "Create files..." tests. The Linux client would practically lock up. <br />
<br />
So after a little Google searching on ZFS keywords, I came across Joe Little's blog post, <a href="http://jmlittle.blogspot.com/2010/03/zfs-log-devices-review-of-ddrdrive-x1.html">ZFS Log Devices: A Review of the DDRdrive X1</a>. This got me thinking about my zpool setup. Looking at the configuration again, I realized that I'd made a mistake and added the second <a href="http://www.amazon.com/Intel-Mainstream-Solid-State-Drive/dp/B001F4YIYY/ref=cm_cr_pr_product_top">Intel X25M SSD</a> to the cache pool instead of the log pool. :)<br />
<br />
Thanks to ZFS awesomeness, it was really easy to pull the SSD out of the cache and designate it as part of the log pool. No downtime for the production system, and no weird weekend hours wasted staring at a glowing terminal console. <br />
<br />
Oh man, did that make a difference in performance. <br />
<br />
Here's what the reconfigured vol0 zpool looks like:<br />
<pre>zpool status vol0
pool: vol0
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
vol0 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
c0t5000C50010377B5Bd0 ONLINE 0 0 0
c0t5000C5001037C317d0 ONLINE 0 0 0
c0t5000C5001037EED7d0 ONLINE 0 0 0
c0t5000C50010381737d0 ONLINE 0 0 0
c0t5000C50010381BBBd0 ONLINE 0 0 0
c0t5000C50010382777d0 ONLINE 0 0 0
c0t5000C5001038291Fd0 ONLINE 0 0 0
c0t5000C500103870A3d0 ONLINE 0 0 0
c0t5000C500103871C3d0 ONLINE 0 0 0
c0t5000C500103924E3d0 ONLINE 0 0 0
c0t5000C500103941F7d0 ONLINE 0 0 0
logs
c1t2d0 ONLINE 0 0 0
cache
c0t50015179591D9AEFd0 ONLINE 0 0 0
c0t50015179591DACA1d0 ONLINE 0 0 0
spares
c0t5000C50010395057d0 AVAIL
errors: No known data errors
</pre><br />
Now ZFS can properly service all of the Linux FSYNC disk requests. Check out the Proxmox performance test improvements. <br />
<br />
<pre>pveperf /mnt/pve/kvm-images/
CPU BOGOMIPS: 76608.87
REGEX/SECOND: 896132
HD SIZE: 7977.14 GB (xxx.xxx.xxx.xxx:/volumes/vol0/kvm-images)
FSYNCS/SECOND: 1403.21
DNS EXT: 58.84 ms
DNS INT: 1.50 ms (my.company.com)
</pre>Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com1tag:blogger.com,1999:blog-3634447693478982417.post-29315076606775296302011-02-18T11:11:00.000-08:002011-02-18T11:14:28.913-08:00Get your CentOS 5.5 mouse to behave as Linux KVM guestJust spent 30 minutes trying to figure out why CentOS 5.5 wasn't playing nice with QEMU/KVM's USB tablet emulator.<br />
<br />
All you need to do is edit the xorg.conf old-school style. My thanks to <a href="http://www.linuxquestions.org/questions/linux-virtualization-90/kvm-mouse-under-windows-guest-performs-way-better-than-under-centos-guest-798091/#post3916357">dyasny</a> for posting his xorg.conf code snippet.<br />
<br />
Here's a copy of my working configuration.<br />
<pre># Xorg configuration created by pyxf86config
Section "ServerLayout"
Identifier "Default Layout"
Screen 0 "Screen0" 0 0
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Tablet" "SendCoreEvents"
InputDevice "Mouse" "CorePointer"
EndSection
Section "InputDevice"
Identifier "Keyboard0"
Driver "kbd"
Option "XkbModel" "pc105"
Option "XkbLayout" "us"
EndSection
Section "InputDevice"
Identifier "Mouse"
Driver "void"
#Option "Device" "/dev/input/mice"
#Option "Emulate3Buttons" "yes"
EndSection
Section "InputDevice"
Identifier "Tablet"
Driver "evdev"
Option "Device" "/dev/input/event2"
Option "CorePointer" "true"
Option "Name" "Adomax QEMU USB Tablet"
EndSection
Section "Device"
Identifier "Videocard0"
Driver "cirrus"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Videocard0"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
EndSubSection
EndSection
</pre><br />
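If the tablet doesn't sit on /dev/input/event2 in your guest, you can look up its event node before editing xorg.conf; a quick check (the device name string matches the "Name" option in the config above):

```
grep -A 4 -i 'QEMU USB Tablet' /proc/bus/input/devices
```

The "Handlers" line of the matching block shows which eventN device to use.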
Hope this helps.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com1tag:blogger.com,1999:blog-3634447693478982417.post-16975382579472507382011-02-01T13:33:00.000-08:002011-02-01T13:35:47.733-08:00Use Clonezilla to image KVM VirtIO disksI'm always seeking to squeeze more speed out of common administrator tasks like disk imaging and P2V conversions. Today I tried using my favorite FOSS cloning software, Clonezilla, to restore an image to a KVM guest running VirtIO disks. What I found was that the current stable release (20110113-maverick) doesn't recognize VirtIO's /dev/vd[a,b,c...] disk naming syntax. You get used to this working with KVM, and I'm still on the fence about VirtIO's naming convention versus the more common /dev/sd[a,b,c...] method.<br />
<br />
Luckily, another Clonezilla user already submitted a patch for VirtIO drives back in December. It should make it into a future stable release in a few months.<br />
<br />
<a href="http://sourceforge.net/tracker/index.php?func=detail&aid=3112544&group_id=115473&atid=671650">http://sourceforge.net/tracker/index.php?func=detail&aid=3112544&group_id=115473&atid=671650</a><br />
<br />
I was in a rush to get a P2V conversion complete, so I used a quick <i>sed</i> one-liner to modify the stable Clonezilla Perl scripts to recognize the /dev/vda disk. You'll need to drop into shell mode to execute this.<br />
<br />
<pre>sudo sed -i 's/\[hs\]/\[vhs\]/g' /opt/drbl/sbin/ocs-*
</pre><br />
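You can preview the substitution on a throwaway string first (the sample line here is made up for illustration; only the [hs] to [vhs] rewrite matters):

```shell
# The same s/\[hs\]/\[vhs\]/ replacement applied to a sample string.
echo 'disk pattern: /dev/[hs]d' | sed 's/\[hs\]/\[vhs\]/'
# prints: disk pattern: /dev/[vhs]d
```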
Keep in mind that these changes will be lost if you're booting from a live CD.<br />
<br />
Using the VirtIO disk drivers improved the disk imaging throughput on my machine by about 15 percent. Also, don't forget to preload the VirtIO drivers on a Windows machine before imaging and restoring; otherwise you'll get a BSOD on boot.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com1tag:blogger.com,1999:blog-3634447693478982417.post-64310795923925546752011-01-19T20:25:00.000-08:002011-01-19T20:56:51.328-08:00Puppet Module For Centrify Express [Reloaded]I've expanded on my previous simple Puppet module for Centrify Express based on the <a href="http://ninjix.blogspot.com/2011/01/puppet-manifest-for-centrify-express-on.html#comments">helpful advice</a> I received from David McNeely at Centrify. This latest version of my module does not expose domain usernames or passwords. Instead, it requires you to pre-create computer accounts from a machine already running Centrify Express as a domain member.<br />
<br />
You can pre-create the account just before you sign the puppet client's certificate.<br />
<pre>sudo adjoin -w -P -u &lt;username&gt; -n &lt;new-hostname&gt; your.domain.net
sudo puppetca -s new-hostname.your.domain.net
</pre>
Download the latest code from GitHub. <a href="https://github.com/ninjix/puppet-centrifydc">puppet-centrify</a> <br />
<br />
<pre>git clone git://github.com/ninjix/puppet-centrifydc.git
</pre><br />
The new version of the module has the following features:<br />
<ul><li>Installs the Centrify Express Ubuntu package</li>
<li>Automatically attempts to join the machine to the domain after installing the apt package</li>
<li>Registers the machine name in Active Directory DNS</li>
<li>Restricts logins on Ubuntu servers to the "Domain Admins" user group</li>
<li>Allows additional logins for users and groups to be granted access</li>
</ul><div>Note: Make sure you enable the Canonical partner repository.</div><pre>deb http://archive.canonical.com/ubuntu lucid partner
</pre><br />
Here are some examples of how you can configure your nodes using this module.<br />
<pre>node 'deimos',
'phobos' inherits default {
$domain = "my.lab.net"
include centrifydc
}
</pre>This is a basic method which provides the domain. The "Domain Admins" group will be granted access by default. You can set other defaults by editing the templates.<br />
<br />
<pre>node 'callisto' inherits default {
$domain = "my.lab.net"
    $groups_allow = ["Astro Group","Physics Team"]
include centrifydc
}
</pre>Example two allows members of the "Astro Group" and "Physics Team" domain groups to log in, in addition to members of the "Domain Admins" group.<br />
<br />
<pre>node 'ganymede' inherits default {
$domain = "my.lab.net"
    $users_allow = ["carl.sagan"]
    $groups_allow = ["Astro Group","Physics Team"]
include centrifydc
}
</pre>The third example is similar to the second but also allows the user "carl.sagan" to log in.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com12tag:blogger.com,1999:blog-3634447693478982417.post-85181982916149708452011-01-17T20:24:00.000-08:002011-01-17T20:27:15.575-08:00Use Unison to Synchronize your remote sharesAt the office and around the house, I often like to keep directories synchronized with network shares. Microsoft has provided two-way, remote folder sync for quite a while now. It is also possible on Linux with a nifty utility named Unison.<br />
<br />
Unison allows you to synchronize in both directions and builds on top of the tried and true rsync protocol. It's built to play well with file exchanges between Unix and Windows hosts. It also has a number of options that allow you to fine tune your sync or script the whole operation. There is a GUI version as well.<br />
<br />
You can install it on Debian/Ubuntu with apt-get:<br />
<pre>sudo apt-get install unison unison-gtk
</pre><br />
In my daily use, I typically have several Nautilus .gvfs mounts to various Windows SMB/CIFS shares and SFTP hosts. Unison isn't directly aware of these Nautilus style mounts so I cobbled together this Nautilus script based on some examples I found at http://g-scripts.sourceforge.net. <br />
<br />
<b>Instructions</b><br />
<br />
Copy the script to your ~/.gnome2/nautilus-scripts/ directory with the name unison-sync.sh. <br />
<br />
Set the execute bit on the script.<br />
<br />
Make sure zenity is installed.<br />
<pre>sudo apt-get install zenity
</pre><br />
With Nautilus, connect to a server resource using SMB or SFTP.<br />
<br />
Right click on a remote directory and click scripts>unison-sync.sh.<br />
<br />
A file directory dialog will appear. This allows you to select the local location you want to synchronize with the server.<br />
<br />
Save the name of the Unison preference file.<br />
<br />
Now run Unison from the terminal or the GUI.<br />
<pre>unison pref_name
</pre><br />
<b>Note</b><br />
<br />
My script enables auto-approve for non-conflicting changes to save time; you might want to change that. It also disables permission syncing, since Windows mounts don't support the same modes as standard Linux file systems.<br />
<br />
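For reference, a preference file produced by this workflow ends up looking something like this (the paths here are illustrative; the option lines match what the script writes):

```
# Unison preferences file
root = /home/me/Documents/reports
root = /home/me/.gvfs/data on fileserver/reports
perms = 0
dontchmod = true
auto = true
```

Unison picks it up from ~/.unison/ by name, minus the .prf extension.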
The unison-sync.sh script:<br />
<pre>#!/bin/bash
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301, USA.
#
# author :
# clayton.kramer <at> gmail.com
#
# description :
# Provides a quick way of making Unison preference files from
# Nautilus.
#
# informations :
# - a script for use (only) with Nautilus.
# - to use, copy to your ${HOME}/.gnome2/nautilus-scripts/ directory.
#
# WARNINGS :
# - this script must be executable.
# - package "zenity" must be installed
#
# THANKS :
# This script was heavily sourced from the work of SLK. Having
# Perl regex to parse .gvfs paths was a huge time saver.
#
# CONSTANTS
# some labels used for zenity [en]
z_title="Synchronize Folder"
z_err_gvfs="cannot access directory - check gvfs\nEXIT"
z_err_uri="cannot access directory - uri not known\nEXIT"
# INIT VARIABLES
# may depends of your system : (current settings for debian, ubuntu)
GVFSMOUNT='/usr/bin/gvfs-mount'
GREP='/bin/grep'
IFCONFIG='/sbin/ifconfig'
KILL='/bin/kill'
LSOF='/usr/bin/lsof'
PERL='/usr/bin/perl'
PYTHON='/usr/bin/python2.5'
SLEEP='/bin/sleep'
ZENITY='/usr/bin/zenity'
# MAIN
export LANG=C
# retrieve the first object selected or the current uri
if [ "$NAUTILUS_SCRIPT_SELECTED_URIS" == "" ] ; then
uri_first_object=`echo -e "$NAUTILUS_SCRIPT_CURRENT_URI" \
| $PERL -ne 'print;exit'`
else
uri_first_object=`echo -e "$NAUTILUS_SCRIPT_SELECTED_URIS" \
| $PERL -ne 'print;exit'`
fi
type_uri=`echo "$uri_first_object" \
| $PERL -pe 's~^(.+?)://.+$~$1~'`
# try to get the full path of the uri (local path or gvfs mount ?)
if [ $type_uri == "file" ] ; then
filepath_object=`echo "$uri_first_object" \
| $PERL -pe '
s~^file://~~;
s~%([0-9A-Fa-f]{2})~chr(hex($1))~eg'`
elif [ $type_uri == "smb" -o $type_uri == "sftp" ] ; then
if [ -x $GVFSMOUNT ] ; then
# host (and share for smb) are matching a directory in ~/.gvfs/
host_share_uri=`echo "$uri_first_object" \
| $PERL -pe '
s~^(smb://.+?/.+?/).*$~$1~;
s~^(sftp://.+?/).*$~$1~;
'`
path_gvfs=`${GVFSMOUNT} -l \
| $GREP "$host_share_uri" \
| $PERL -ne 'print/^.+?:\s(.+?)\s->.+$/'`
# now let's create the local path
path_uri=`echo "$uri_first_object" \
| $PERL -pe '
s~^smb://.+?/.+?/~~;
s~^sftp://.+?/~~;
s~%([0-9A-Fa-f]{2})~chr(hex($1))~eg'`
filepath_object="${HOME}/.gvfs/${path_gvfs}/${path_uri}"
else
$ZENITY --error --title "$z_title" --width "320" \
--text="$z_err_gvfs"
exit 1
fi
else
$ZENITY --error --title "$z_title" --width "320" \
--text="$z_err_uri"
exit 1
fi
if [ ! -d "${HOME}/.unison" ]; then
# create the Unison user directory if it doesn't exist
mkdir -p "${HOME}/.unison"
fi
# Select a local directory to sync with
local_dir=`$ZENITY --title "$z_title" --file-selection --directory`
# Provide an alias for the sync
mount_name=`echo "$filepath_object" | perl -ne 'print/main on (\w*)\//'`
base_name=`echo "$filepath_object" | perl -ne 'print/.*\/(.*)$/;'`
alias="$mount_name-$base_name"
alias=`$ZENITY --title "$z_title" --entry --text="Enter a name for this Unison preferences file." --entry-text="$alias"`
alias="$alias.prf"
# Write the Unison file
echo "# Unison preferences file" > ${HOME}/.unison/$alias
echo "root = $local_dir" >> ${HOME}/.unison/$alias
echo "root = $filepath_object" >> ${HOME}/.unison/$alias
echo "perms = 0" >> ${HOME}/.unison/$alias
echo "dontchmod = true" >> ${HOME}/.unison/$alias
echo "auto = true" >> ${HOME}/.unison/$alias
exit 0
### EOF
</at></pre>Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-25357810870816942242011-01-16T20:06:00.000-08:002011-01-16T20:06:48.914-08:00Puppet manifest for Centrify Express on UbuntuI've been really pleased with Canonical's new partnership with Centrify, one of the big names in Unix/Linux/Mac Active Directory integration. For the last month, I've started to replace Likewise Open on all of our machines at work. <br />
<br />
Tonight, I took a moment to write a quick Puppet manifest for installing centrifydc and automatically joining the machine to our AD infrastructure. <br />
<br />
<b>Requirements</b><br />
<ul><li>Have an AD user account with privileges to add more than 10 computers to your domain.</li>
<li>Enable the Canonical partner repository (I manage my /etc/apt/sources.list with Puppet)</li>
</ul><div>This script is going to expose a user account password in a text file, so make sure you lock the file down at the same time you delegate the computer object permissions. (If anyone has a better way, I'd appreciate a comment.)</div><br />
<pre>class centrify {
package { centrifydc :
ensure => latest ,
notify => Exec["adjoin"]
}
exec { "adjoin" :
path => "/usr/bin:/usr/sbin:/bin",
returns => 15,
command => "adjoin -w -u domainjoiner -p passwordF00 my.company.net",
refreshonly => true,
}
service { centrifydc:
ensure => running
}
}
</pre><br />
The domain join action is only executed when Puppet detects that the package has to be installed or updated. Successful AD joins return a "15" code instead of the normal "0".Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com4tag:blogger.com,1999:blog-3634447693478982417.post-76839201093729158772011-01-12T20:34:00.000-08:002011-09-23T06:18:15.431-07:00Highly Available Zabbix Monitoring Server Using Corosync + Pacemaker + DRBDI recently built a highly available Zabbix monitoring server for a client. It uses the Linux HA tools Corosync and Pacemaker to cluster the services. Linbit's DRBD is used as the cluster storage.<br />
<br />
This configuration uses Ubuntu Server 10.04 LTS (Lucid) as the Linux distribution for the two cluster nodes. These instructions should work on Ubuntu 10.10 and Debian 6.0 (Squeeze) with minor changes. <br />
<br />
<b>Server Network Configuration</b><br />
<pre>virt ip 192.168.0.20
zbx-01 192.168.0.21
zbx-02 192.168.0.22
</pre><br />
I built this configuration on Linux KVM machines using VirtIO disks. These disks show up as /dev/vd* instead of the typical /dev/sd* convention. Make sure you make changes as necessary for your environment. <br />
<br />
Each server has a second virtual disk that will be used by DRBD.<br />
<br />
<span class="Apple-style-span" style="font-size: large;">Setup DRBD</span><br />
<br />
Begin with DRBD. It provides the block device on which a file system will store MySQL's data files. It is available in the official Ubuntu repositories.<br />
<br />
<pre>sudo apt-get install linux-headers-server psmisc drbd8-utils
</pre><br />
Create a DRBD resource configuration file at /etc/drbd.d/mysql_r0.res. <br />
<br />
<pre>resource mysql_r0 {
syncer {
rate 110M;
}
on zbx-01 {
device /dev/drbd1;
disk /dev/vdb;
address 192.168.0.21:7789;
meta-disk internal;
}
on zbx-02 {
device /dev/drbd1;
disk /dev/vdb;
address 192.168.0.22:7789;
meta-disk internal;
}
}
</pre><br />
Some important things to know:<br />
<br />
<ul><li>The DRBD daemon expects the file to end with ".res"</li>
<li>Make sure to change device and IP address for your environment.</li>
<li>Syncer rate 110M is for 1Gb network connections.</li>
<li>The host names used in the resource file must match each machine's actual hostname</li>
</ul><br />
Create the DRBD meta data on the resource device.<br />
<br />
<pre>sudo drbdadm create-md mysql_r0
</pre><br />
Now repeat the previous steps on the second server, zbx-02. <br />
<br />
Start the DRBD service on both servers.<br />
<br />
<pre>/etc/init.d/drbd start
</pre><br />
Use zbx-01 as the primary server to start with. You'll use it to create the filesystem and force the other DRBD node, zbx-02, to sync from it. <br />
<br />
On zbx-01:<br />
<pre>sudo drbdadm -- --overwrite-data-of-peer primary mysql_r0
sudo drbdadm primary mysql_r0
sudo mkfs.ext4 /dev/drbd1
</pre><br />
Depending on the size of your DRBD disk, it may take a minute or so to synchronize the two resources. I like to monitor the progress of this initial sync using the following command.<br />
<br />
<pre>watch cat /proc/drbd
</pre><br />
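If you only care about the percentage, you can grep it out of /proc/drbd rather than watching the whole file. This is just a sketch; the sample line below is illustrative output, not from a live cluster.

```shell
# Extract the resync percentage from a /proc/drbd progress line.
# The sample line is hard-coded for illustration; on a real node you
# would pipe in the output of: cat /proc/drbd
sample="[=====>..............] sync'ed: 31.4% (70432/102400)M"
pct=$(echo "$sample" | grep -Eo '[0-9.]+%')
echo "resync progress: $pct"
```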
Now mount the DRBD resource.<br />
<br />
<pre>sudo mount /dev/drbd1 /mnt
</pre><br />
Remove the DRBD LSB init links since the service start and stop will be controlled by Pacemaker.<br />
<br />
<pre>sudo update-rc.d -f drbd remove
</pre><br />
<span class="Apple-style-span" style="font-size: large;">MySQL Server Installation and Configuration</span><br />
<br />
Install the MySQL Server packages.<br />
<br />
<pre>sudo apt-get install mysql-server
</pre><br />
Stop the MySQL Server daemon.<br />
<br />
<pre>sudo /etc/init.d/mysql stop
</pre><br />
Copy the MySQL data directory to the DRBD supported mount.<br />
<br />
<pre>sudo cp -av /var/lib/mysql/ /mnt/
</pre><br />
Edit the /etc/mysql/my.cnf file. Change the bind address to that of the virtual IP. Set the datadir property to point to the DRBD mount you created earlier. Note that this example uses the /mnt folder for simplicity; you will most likely want to change this to something like /mnt/drbd1 for production use.<br />
<br />
/etc/mysql/my.cnf<br />
<pre>[mysqld]
user = mysql
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /mnt/mysql
tmpdir = /tmp
skip-external-locking
</pre><br />
I like to add the following InnoDB properties to the MySQL my.cnf file. These settings are tuned for a machine with 4 CPUs and 4 GB of memory. MySQL and DRBD pros recommend the InnoDB engine because it has much better recovery characteristics than the older MyISAM. I set my server to default to the InnoDB engine for this reason. <br />
<br />
/etc/mysql/my.cnf<br />
<pre>...
#
# * Make InnoDB the default engine
#
default-storage-engine = innodb
#
# * Innodb Performance Settings
#
innodb_buffer_pool_size = 1600M
innodb_log_file_size = 256M
innodb_log_buffer_size = 4M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_file_per_table
...
</pre><br />
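The buffer pool figure above is roughly 40% of the machine's 4 GB of RAM, leaving room for the OS, DRBD and everything else running on the box. The percentage is my rule of thumb, not an official MySQL recommendation.

```shell
# Sketch of the buffer pool sizing rule of thumb: take about 40% of
# physical RAM on a mixed-use 4 GB box. Both numbers are assumptions
# to adjust for your own hardware.
RAM_MB=4096
echo "innodb_buffer_pool_size ~ $(( RAM_MB * 40 / 100 ))M"
```

Rounded down, that gives the 1600M value used in the my.cnf snippet above.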
Repeat the previous MySQL /etc/mysql/my.cnf changes on zbx-02.<br />
<br />
You may need to delete the InnoDB log and data files if you have changed the default settings to the performance ones I used; MySQL will refuse to start when innodb_log_file_size no longer matches the existing log files. DO NOT DO THIS ON A SYSTEM IN PRODUCTION!<br />
<br />
<pre>cd /mnt/mysql
sudo rm ib*
</pre><br />
On zbx-01 try starting the MySQL Server.<br />
<br />
<pre>sudo /etc/init.d/mysql start
</pre><br />
Watch the /var/log/mysql/mysql.err for any problems. Logging in with a mysql client is also a good idea. <br />
<br />
Stop MySQL once you've confirmed it's running properly on the DRBD resource.<br />
<br />
Remove the MySQL LSB daemon start links so they do not conflict with Pacemaker.<br />
<br />
<pre>sudo update-rc.d -f mysql remove
</pre><br />
There is also an Upstart script included with the Ubuntu MySQL Server package. You'll need to edit it so that it doesn't try to start the service on boot up.<br />
<br />
Comment out the start, stop and respawn commands in /etc/init/mysql.conf. It should look like this example snippet.<br />
<br />
<pre># MySQL Service
description "MySQL Server"
author "Mario Limonciello &lt;superm1@ubuntu.com&gt;"
#start on (net-device-up
# and local-filesystems
# and runlevel [2345])
#stop on runlevel [016]
#respawn
env HOME=/etc/mysql
umask 007
...
</pre><br />
Repeat this step on zbx-02.<br />
<br />
<span class="Apple-style-span" style="font-size: large;">Install and Configure Corosync and Pacemaker</span><br />
<br />
Pacemaker with Corosync is included in the Ubuntu 10.04 LTS repositories. <br />
<br />
<pre>sudo apt-get install pacemaker
</pre><br />
Edit the /etc/default/corosync file using your favorite text editor and enable corosync (START=yes).<br />
<br />
Pacemaker uses encrypted connections between the cluster nodes so you need to generate a corosync authkey file.<br />
<br />
<pre>sudo corosync-keygen
</pre><br />
*Note!* This can take a while if there's not enough entropy. <br />
<br />
Copy the /etc/corosync/authkey to all servers that will form this cluster. Make sure it is owned by root:root and has 400 permissions. <br />
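A quick way to confirm the mode after copying is with stat. This sketch demonstrates the check on a temporary stand-in file rather than touching /etc/corosync/authkey itself.

```shell
# Verify a file carries mode 400, as the authkey must. A temp file
# stands in for /etc/corosync/authkey so the demo is harmless.
f=$(mktemp)
chmod 400 "$f"
stat -c '%a' "$f"   # should print 400
rm -f "$f"
```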
<br />
In /etc/corosync/corosync.conf replace bindnetaddr (by default it's 127.0.0.1) with the network address of your server, replacing the last octet with 0. For example, if your IP is 192.168.0.21, then you would put 192.168.0.0. <br />
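The last-octet substitution is easy to script. This sketch assumes a /24 network like the 192.168.0.x one used throughout this post; other netmasks need different math.

```shell
# Derive the bindnetaddr value from a host IP by zeroing the last
# octet (valid for a /24 network only).
IP=192.168.0.21
NET=$(echo "$IP" | awk -F. '{print $1"."$2"."$3".0"}')
echo "bindnetaddr: $NET"
```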
<br />
Start the Corosync daemon. <br />
<br />
<pre>sudo /etc/init.d/corosync start
</pre><br />
Now your cluster is configured and ready to monitor, stop and start your services on all your cluster servers. <br />
<br />
You can check the status with the crm status command.<br />
<br />
<pre>crm status
============
Last updated: Wed Sep 15 11:33:09 2010
Stack: openais
Current DC: zbx-01 - partition with quorum
Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ zbx-01 zbx-02 ]
</pre><br />
Now update the Corosync CRM configuration to include DRBD and MySQL.<br />
<br />
<pre>sudo crm configure edit
</pre><br />
Here's a working example but be sure to edit for your environment.<br />
<br />
<pre>node zbx-01 \
attributes standby="off"
node zbx-02 \
attributes standby="off"
primitive drbd_mysql ocf:linbit:drbd \
params drbd_resource="mysql_r0" \
op monitor interval="15s"
primitive fs_mysql ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/mysql_r0" directory="/mnt/" fstype="ext4" options="acl"
primitive ip_mysql ocf:heartbeat:IPaddr2 \
params ip="192.168.0.20" nic="eth0"
primitive mysqld lsb:mysql \
op start interval="0" timeout="120s" \
op stop interval="0" timeout="120s" \
op monitor interval="30s"
group zabbix_group fs_mysql ip_mysql mysqld \
meta target-role="Started"
ms ms_drbd_mysql drbd_mysql \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Master"
colocation mysql_on_drbd inf: _rsc_set_ zabbix_group ms_drbd_mysql:Master
order mysql_after_drbd inf: _rsc_set_ ms_drbd_mysql:promote zabbix_group:start
property $id="cib-bootstrap-options" \
dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
no-quorum-policy="ignore" \
stonith-enabled="false" \
last-lrm-refresh="1294782404"
</pre><br />
Some notes about this configuration:<br />
<br />
<ul><li>It monitors the DRBD resource every 15s</li>
<li>The takeover IP address is 192.168.0.20</li>
<li>MySQL Server is allowed 2 minutes to start up in case it needs to perform recovery operations on the Zabbix database</li>
<li>The STONITH property is disabled since we are only setting up a two node cluster.</li>
</ul><br />
You can check the status of the cluster with the crm_mon utility.<br />
<br />
<pre>sudo crm_mon
</pre><br />
Here's an example of what you want to see:<br />
<br />
<pre>============
Last updated: Wed Mar 11 23:04:49 2011
Stack: openais
Current DC: zbx-01 - partition with quorum
Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ zbx-01 zbx-02 ]
Resource Group: zabbix_group
fs_mysql (ocf::heartbeat:Filesystem): Started zbx-01
ip_mysql (ocf::heartbeat:IPaddr2): Started zbx-01
mysqld (lsb:mysql): Started zbx-01
Master/Slave Set: ms_drbd_mysql
Masters: [ zbx-01 ]
Slaves: [ zbx-02 ]
</pre><br />
<span class="Apple-style-span" style="font-size: large;">Install Zabbix Server</span><br />
<br />
How you install Zabbix is up to you. I like to recompile the latest upstream Debian packages, but using the older Ubuntu Lucid repository version or the official tarball will also work. If you use the apt package, remember not to use the dbconfig-common option on zbx-02. You can copy over the config files from zbx-01.<br />
<br />
<pre>sudo apt-get install zabbix-server-mysql
</pre><br />
Edit the /etc/zabbix/zabbix_server.conf file. Set SourceIP=192.168.0.20 so that Zabbix will use the virtual "takeover" IP address. This will make setting up client configurations and firewall rules much easier.<br />
<br />
Check your newly installed Zabbix server for a clean start.<br />
<br />
<pre>sudo tail /var/log/zabbix-server/zabbix-server.log
</pre><br />
Remove the LSB init script links.<br />
<br />
<pre>sudo update-rc.d -f zabbix-server remove
</pre><br />
Install Apache and Zabbix PHP frontend.<br />
<br />
<pre>sudo apt-get install apache2 php5 php5-mysql php5-ldap php5-gd zabbix-frontend-php
</pre><br />
Remove Apache's auto start links.<br />
<br />
<pre>sudo update-rc.d -f apache2 remove
</pre><br />
Repeat on zbx-02.<br />
<br />
Copy the configuration file from zbx-01's /etc/zabbix directory to zbx-02's /etc/zabbix folder.<br />
<br />
<span class="Apple-style-span" style="font-size: large;">Update Corosync Configuration With Zabbix and Apache</span><br />
<br />
<pre>sudo crm configure edit
</pre><br />
Working example:<br />
<pre>node zbx-01 \
attributes standby="off"
node zbx-02 \
attributes standby="off"
primitive apache lsb:apache2
primitive drbd_mysql ocf:linbit:drbd \
params drbd_resource="mysql_r0" \
op monitor interval="15s"
primitive fs_mysql ocf:heartbeat:Filesystem \
params device="/dev/drbd/by-res/mysql_r0" directory="/mnt/" fstype="ext4" options="acl"
primitive ip_mysql ocf:heartbeat:IPaddr2 \
params ip="192.168.0.20" nic="eth0"
primitive mysqld lsb:mysql \
op start interval="0" timeout="120s" \
op stop interval="0" timeout="120s" \
op monitor interval="30s"
primitive zabbix lsb:zabbix-server \
op start interval="0" timeout="60" delay="5s" \
op monitor interval="30s"
group zabbix_group fs_mysql ip_mysql mysqld zabbix apache \
meta target-role="Started"
ms ms_drbd_mysql drbd_mysql \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Master"
colocation mysql_on_drbd inf: _rsc_set_ zabbix_group ms_drbd_mysql:Master
order mysql_after_drbd inf: _rsc_set_ ms_drbd_mysql:promote zabbix_group:start
property $id="cib-bootstrap-options" \
dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
no-quorum-policy="ignore" \
stonith-enabled="false" \
last-lrm-refresh="1294782404"
</pre>Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com11tag:blogger.com,1999:blog-3634447693478982417.post-34192827234861125352010-08-04T06:20:00.000-07:002010-08-04T06:45:47.820-07:00Ubuntu 10.04 MySQL Server startup bugI encountered an issue yesterday with the mysql-server package on a server when I attempted to use the my.large.cnf settings file in place of the default. <br />
<br />
See Launchpad: <a href="https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.1/+bug/566736">https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.1/+bug/566736</a><br />
<br />
There is a bug with Ubuntu 10.04's MySQL server. If you have it bind to a specific interface, it can fail to start on reboot because the Upstart job fires as soon as any network interface (such as 127.0.0.1) is up. If the interface that MySQL is bound to isn't initialized yet, it will hang. If you try to remove any specific interface bindings from the my.cnf settings, you'll run into another problem: port assignment. You need to make sure that the Upstart init script matches the <i>bind-address</i> value in the <i>my.cnf</i> file. <br />
<br />
My edits:<br />
<br />
/etc/mysql/my.cnf<br />
<pre># Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
</pre><br />
/etc/init/mysql.conf<br />
<pre>start on (net-device-up IFACE=lo
and local-filesystems
and runlevel [2345])
stop on runlevel [016]
</pre><br />
Note the <b>IFACE=lo</b> addition to the <b>start on</b> line.<br />
<br />
My thanks to cdenley's <a href="http://ubuntuforums.org/showthread.php?t=1479310">post</a> on the Ubuntu forums for shedding light on the problem.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-18274364831798657182010-07-27T13:28:00.000-07:002010-07-27T13:32:31.609-07:00Grub_puts not foundTwo of our Ubuntu 10.4 Lucid workstations ran into Grub2 errors today. Something must have gone wrong with the grub2 apt scripts while they were updating to the latest kernel. Both of the machines with the problem were created from the same Clonezilla image but a few of the other cloned machines weren't affected.<br />
<br />
After running the apt-get dist-upgrade command and rebooting, my users encountered the <i>"symbol 'grub_puts' not found"</i> error. <br />
<br />
<b>Instructions</b><br />
<br />
Burn the Ubuntu desktop ISO to CDROM or use the System > Administrator > Startup Disk Creator to create a bootable USB stick.<br />
<br />
Boot from your live disk.<br />
<br />
Open a terminal and get a list of the available partitions.<br />
<br />
<pre>sudo fdisk -l
</pre><br />
You should see results that look something like this:<br />
<br />
<pre>Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e0719
Device Boot Start End Blocks Id System
/dev/sda1 * 1 32 248832 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 32 9730 77899777 5 Extended
/dev/sda5 32 9730 77899776 83 Linux
Disk /dev/sdb: 8053 MB, 8053063680 bytes
255 heads, 63 sectors/track, 979 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00009233
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 255 2048256 b W95 FAT32
/dev/sdb2 256 979 5815530 b W95 FAT32
</pre><br />
In my example above, you can see the system drive is listed as /dev/sda and the bootable USB is /dev/sdb. You may, like me, have a separate /boot partition because you are running encrypted LVM volumes. In that case you need to pay attention to which is your root volume. <br />
<br />
Mount your "root" partition or volume first. Standard Linux partitions are simple.<br />
<br />
<pre>sudo mount /dev/sda1 /mnt
</pre><br />
An encrypted LVM is a little more complicated. The Ubuntu Live CD doesn't have the LVM crypto packages installed so run these commands to get it working.<br />
<br />
<pre>sudo apt-get install lvm2 cryptsetup
</pre><br />
Load the dm-crypt module.<br />
<br />
<pre>sudo modprobe dm-crypt
</pre><br />
Now unlock your encrypted volume. Enter your LUKS passphrase when prompted.<br />
<br />
<pre>sudo cryptsetup luksOpen /dev/sda2 foo
</pre><br />
Load the LVM Kernel module.<br />
<br />
<pre>sudo modprobe dm-mod
</pre><br />
Scan for all of the available volume groups.<br />
<br />
<pre>sudo vgscan
</pre><br />
Activate the volume group. <br />
<br />
<pre>sudo vgchange -ay
</pre><br />
Now list the logical volumes along with their /dev paths. In the example below, note that my laptop is named "falcon" and yours is most likely something else.<br />
<br />
<pre>sudo lvscan
ACTIVE '/dev/falcon/root' [71.22 GiB] inherit
ACTIVE '/dev/falcon/swap_1' [3.07 GiB] inherit
</pre><br />
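If you'd rather script that lookup than eyeball it, something like this works. The scan text is this post's own example output for a machine named "falcon", hard-coded for illustration.

```shell
# Pull the root LV's device path out of lvscan-style output. The scan
# text is sample data; on a live system you would use:
# scan=$(sudo lvscan)
scan="ACTIVE '/dev/falcon/root' [71.22 GiB] inherit
ACTIVE '/dev/falcon/swap_1' [3.07 GiB] inherit"
root=$(echo "$scan" | grep "/root'" | cut -d"'" -f2)
echo "sudo mount $root /mnt"
```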
Now mount the root volume to /mnt. Replace falcon to match your own results of the previous command.<br />
<br />
<pre>sudo mount /dev/falcon/root /mnt
</pre><br />
<b>Chroot Prep</b><br />
<br />
Now mount the /dev, /proc, /sys folders for os-prober and grub to work properly in a chrooted jail.<br />
<br />
<pre>sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
</pre><br />
If you have a separate /boot partition because of LVM, then mount it now. <br />
<br />
<pre>sudo mount /dev/sda1 /mnt/boot
</pre><br />
Now chroot yourself.<br />
<br />
<pre>sudo chroot /mnt
</pre><br />
<b>Repair Grub2</b><br />
<br />
Run the grub-mkconfig command to generate a new grub2 configuration file. This might be what got corrupted and left you in the lurch.<br />
<br />
<pre>grub-mkconfig -o /boot/grub/grub.cfg
</pre><br />
Make sure no errors were generated. Then install grub2 in the hard drive MBR.<br />
<br />
<pre>grub-install /dev/sda
</pre><br />
Again, make sure you didn't get any errors. If you want a warm and fuzzy feeling, test your repair with the recheck option.<br />
<br />
<pre>grub-install --recheck /dev/sda
</pre><br />
Exit out of the chroot with the exit command or Ctrl+D.<br />
<br />
Unmount the directories.<br />
<br />
<pre>sudo umount /mnt/sys
sudo umount /mnt/proc
sudo umount /mnt/dev
sudo umount /mnt
</pre><br />
Now reboot and you should have your system back.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-61632014367572762812010-06-02T08:12:00.000-07:002010-06-02T08:13:10.715-07:00Blurry Linux KVM screen fixQEMU/KVM scales the guest's screen to fit a re-sized window. This is one of the small "paper cuts" that I've been living with since moving to KVM for my virtualization needs.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/_kt0AAwdy4jI/TAZwpcybGnI/AAAAAAAAAII/6dia3uMQpFk/s1600/QEMU+%28Windows+XP+Guest%29_002.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/_kt0AAwdy4jI/TAZwpcybGnI/AAAAAAAAAII/6dia3uMQpFk/s320/QEMU+%28Windows+XP+Guest%29_002.png" /></a></div><br />
<br />
Having the screen resize is fine for occasions where you want to lessen the screen real-estate of a VM but still keep an eye on what is going on. The problem comes when you try to drag the window back to its 1:1 size. With a free hand you won't be able to get the window size exactly back to a 1:1 ratio, so everything in the VM will look slightly blurry.<br />
<br />
Here's an example of me trying to get this 1152 x 864 Windows XP KVM back to its actual screen resolution. It's close but still blurry.<br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/_kt0AAwdy4jI/TAZxHLTf_0I/AAAAAAAAAIQ/fNsRX8qsOmc/s1600/Selection_003.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/_kt0AAwdy4jI/TAZxHLTf_0I/AAAAAAAAAIQ/fNsRX8qsOmc/s320/Selection_003.png" /></a></div><br />
I've lived with this blurriness for months now, but today I came across Al Dimond's post about the problem. He took the time to investigate and found a quick workaround using <i>xdotool</i> to resize the KVM window to a width and height one pixel less than the guest's resolution.<br />
<br />
First, get the window ID of your KVM.<br />
<pre>xdotool search --title QEMU</pre><br />
Then use the <i>windowsize</i> option to set the window to an exact size. The window ID in my example is 90177539.<br />
<br />
<pre>xdotool windowsize 90177539 1151 863</pre><br />
Following the one pixel less workaround, you would use this command for a 1024x768 guest.<br />
<br />
<pre>xdotool windowsize 90177539 1023 767</pre><br />
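The one-pixel-less rule generalizes to any guest resolution. This sketch reuses the example window ID from above; substitute the ID that xdotool search returns for you.

```shell
# Compute the xdotool windowsize arguments for a given guest
# resolution using the one-pixel-less workaround.
W=1152
H=864
WINDOW_ID=90177539   # placeholder: use the ID from xdotool search
echo "xdotool windowsize $WINDOW_ID $(( W - 1 )) $(( H - 1 ))"
```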
Now the guest screen is sharp and crisp again.<br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/_kt0AAwdy4jI/TAZ0zVdAzLI/AAAAAAAAAIY/NP1NNtLK0eI/s1600/Selection_004.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/_kt0AAwdy4jI/TAZ0zVdAzLI/AAAAAAAAAIY/NP1NNtLK0eI/s320/Selection_004.png" /></a></div>Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com4tag:blogger.com,1999:blog-3634447693478982417.post-31224520705791155732010-05-05T11:53:00.000-07:002010-05-05T11:54:52.220-07:00Crank up the throttle on DD transfersThis is a quick one but I want to write it down just because I always seem to forget it if I don't use DD for a while.<br />
<br />
DD's default block size (bs) is 512 bytes!<br />
<br />
That's fine for some small work, but you'll be waiting around for hours if you're trying to shovel large disk images around.<br />
<br />
Kick DD into high gear by raising the bs value to 32k and be done in minutes.<br />
<br />
Example:<br />
<pre>dd if=/dev/loop0 of=/dev/sdb bs=32k
</pre><br />
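To see why the block size matters so much, count the read/write calls dd has to make for a 1 GiB image at each setting:

```shell
# Number of blocks (and therefore read/write call pairs) dd issues
# for a 1 GiB image at the 512-byte default versus bs=32k.
SIZE=$(( 1024 * 1024 * 1024 ))
echo "bs=512: $(( SIZE / 512 )) blocks"
echo "bs=32k: $(( SIZE / 32768 )) blocks"
```

That's 64 times fewer system calls at 32k, which is where the throughput jump comes from.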
In the example above, I've mounted a KVM disk image locally and an iSCSI lun as /dev/sdb. With the default 512b, DD was only able to utilize my 1g network at 5Mb/s. Switching to a 32k block size, it utilized 40Mb/s.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-72212006998340954662010-05-03T21:04:00.000-07:002010-07-27T13:39:58.182-07:00Use Clonezilla for physical disk to iSCSI volume transferFor the last few nights, I've been playing around with open-iscsi on Debian, Ubuntu and Windows 2008. Getting things up and running was fairly straightforward thanks to all of the helpful blogs and howtos people have posted. What I found missing was how one moves a Linux installation from a physical or virtual disk to an iSCSI volume. The little I found about the subject involved physically mounting the source disk to the iSCSI host or performing some tricky PXE boot magic to run the Linux distribution's installer. I find both of these methods inelegant and limited. <br />
<br />
Tonight I came at it again, this time with my favorite FOSS disk imaging tool, Clonezilla! The wonderful team behind it didn't skimp and included the open-iscsi packages. <br />
<br />
<h2>Instructions</h2><br />
Download and burn a copy of the latest Ubuntu version of Clonezilla.<br />
<br />
Boot from the Clonezilla live CDROM. Select all of the regional configuration options you require.<br />
<br />
Stop when you get to the ncurses prompt asking whether to begin using Clonezilla or drop to the console. Press Alt+F2 to switch to the second tty console. This will let you work with the tools and set up a connection to your iSCSI share.<br />
<br />
Get networking configured; otherwise you aren't going to be able to connect to the LUN.<br />
<pre>sudo dhclient eth0
</pre><br />
Now edit the iscsid.conf file.<br />
<pre>sudo vi /etc/iscsi/iscsid.conf
</pre><br />
Look for the node.startup property and set it to automatic.<br />
<br />
Now start the open-iscsi daemon.<br />
<pre>sudo /etc/init.d/open-iscsi start
</pre><br />
Use the following command to query the iSCSI target for LUNs.<br />
<pre>iscsiadm -m discovery -t sendtargets -p IP_OF_YOUR_TARGET
</pre><br />
Here's an example of what mine looked like:<br />
<pre>user@karmic:~$ sudo iscsiadm -m discovery -t st -p localhost
192.168.50.10:3260,1 iqn.2007-10.local.server-1:storage.lun0
</pre><br />
Now I can connect using the following:<br />
<pre>iscsiadm -m node -T iqn.2007-10.local.server-1:storage.lun0 -p 192.168.50.10:3260 -l
</pre><br />
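The login command can be built straight from the discovery output. This sketch hard-codes this post's own example line for illustration; in practice you would feed in the real iscsiadm discovery output.

```shell
# Turn a discovery line into the matching iscsiadm login command.
# The sample line is this post's example output, not live data.
line="192.168.50.10:3260,1 iqn.2007-10.local.server-1:storage.lun0"
iqn=$(echo "$line" | awk '{print $2}')
portal=$(echo "$line" | awk '{print $1}' | cut -d, -f1)
echo "iscsiadm -m node -T $iqn -p $portal -l"
```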
Now check the /var/log/messages for the newly created virtual SCSI device.<br />
<pre>tail /var/log/messages
</pre><br />
Now you can switch back to console #1 and continue with Clonezilla wizard. Select local disk to local disk when prompted for which mode to use.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com9tag:blogger.com,1999:blog-3634447693478982417.post-56358766271440834082010-05-03T10:38:00.000-07:002010-05-03T20:20:10.091-07:00Create Cisco VPN on Ubuntu Karmic/LucidIt is very easy to setup a Cisco VPN on Ubuntu. I used the following instructions to get my corporate tunnels running. This tutorial assumes you have already acquired a .pcf file from your network IT staff.<br />
<br />
<h2>Instructions</h2>Install the <i>vpnc</i> package and any required dependencies:<br />
<pre>sudo apt-get install vpnc
</pre><br />
Open your vpn pcf configuration file with your favorite text editor.<br />
<pre>vim corporatenet.pcf
</pre><br />
It will looking something like this:<br />
<pre>[main]
Description=
Host=vpn.corpnet.com
AuthType=1
GroupName=CorpNet
enc_GroupPwd=C555E3A4BE82FF0001601A38260A92D93FF5693A482367E117EF8697CBED681C5FDD7F2AE0DEEA4B37DBBB21434189A46D8955F11916040A
EnableISPConnect=0
ISPConnectType=0
ISPConnect=
ISPPhonebook=
ISPCommand=
Username=
SaveUserPassword=0
UserPassword=
enc_UserPassword=
NTDomain=
EnableBackup=0
BackupServer=
EnableMSLogon=1
MSLogonType=0
EnableNat=1
TunnelingMode=0
TcpTunnelingPort=10000
CertStore=0
CertName=
CertPath=
CertSubjectName=
CertSerialHash=00000000000000000000000000000000
SendCertChain=0
PeerTimeout=90
EnableLocalLAN=0
</pre><br />
Note the values for Host, GroupName and enc_GroupPwd. You'll need these to create your vpnc configuration file.<br />
<br />
<pre>sudo vim /etc/vpnc/corpnet.conf
</pre><br />
Make your configuration file look like this. Just make sure to replace the fictional CorpNet values with your own.<br />
<br />
<pre>IPSec gateway vpn.corpnet.com
IPSec ID CorpNet
IPSec obfuscated secret C555E3A4BE82FF0001601A38260A92D93FF5693A482367E117EF8697CBED681C5FDD7F2AE0DEEA4B37DBBB21434189A46D8955F11916040A
Xauth username YOURUSERNAME
Xauth password YOURPASSWORD
</pre><br />
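If you have several .pcf files to convert, the three values can be pulled out mechanically. This is a rough sketch using this post's fictional CorpNet values (password hash truncated for brevity); point pcf at your real file instead.

```shell
# Extract the three fields vpnc needs from a .pcf file. The sample
# file is written inline so the sketch is self-contained.
pcf=$(mktemp)
cat > "$pcf" <<'EOF'
[main]
Host=vpn.corpnet.com
GroupName=CorpNet
enc_GroupPwd=C555E3A4BE82
EOF
awk -F= '/^Host=/        {print "IPSec gateway " $2}
         /^GroupName=/   {print "IPSec ID " $2}
         /^enc_GroupPwd=/{print "IPSec obfuscated secret " $2}' "$pcf"
rm -f "$pcf"
```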
It's important to note the <b>obfuscated</b> option in the group password. Most of the examples and howtos I've seen on the Net leave this out because they were written several years ago, before VPNC supported Cisco encrypted passwords. The older guides required you to decrypt the Cisco string. This isn't necessary anymore with the Karmic and Lucid releases.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-39172414117746988502010-04-29T11:28:00.000-07:002010-04-29T11:38:55.277-07:00Run Windows virtual machines on Ubuntu/Debian desktop with KVMBoth at home and at work, I use Ubuntu as my operating system. There are times when I'm forced to use Windows for some reason, and there are several solutions for hosting Windows virtual machines on an Ubuntu laptop. Several years ago, I used what I understood best, VMware's workstation offering for Linux. Later, when Virtualbox-ose (open source edition) caught up with VMware's features and was available from Ubuntu's repositories, I switched to it.<br />
<br />
These days, I'm much more technically adept with FOSS virtualization technologies and made the switch to using Linux KVM on my newer machines which support Intel's VT and AMD's AMD-V acceleration. I don't have any Phoronix style detailed comparisons but KVM feels faster and lighter than Virtualbox or VMware.<br />
<h2>Quick Setup</h2>Install the qemu-kvm package<br />
<pre>sudo apt-get install qemu-kvm
</pre><br />
Create a directory to hold your virtual machines.<br />
<pre>mkdir -p ~/VM/WinXP
</pre><br />
Move to that directory and create a disk image file.<br />
<pre>cd ~/VM/WinXP
qemu-img create -f raw windows_xp.img 12G
</pre><br />
Options: <br />
-f raw = creates raw IO driver format image (You could also use the qcow2 mode. It has more features but doesn't perform as fast as raw)<br />
windows_xp.img = name of the image file<br />
12G = The virtual disk size.<br />
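One nice property of the raw image worth knowing: it's created sparse, so it shows the full 12G virtual size but uses almost no real disk until the guest writes data. Here's a sketch demonstrating that, with truncate standing in for qemu-img so no extra packages are needed.

```shell
# Show that a freshly created raw image is sparse: the apparent size
# is the full 12G while the actual allocation is (nearly) zero.
# truncate stands in for qemu-img create -f raw here.
img=$(mktemp)
truncate -s 12G "$img"
echo "apparent bytes: $(stat -c '%s' "$img")"
echo "allocated KB:   $(du -k "$img" | cut -f1)"
rm -f "$img"
```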
<br />
Now create a bash script using your favourite text editor. I like vim but you could just as easily use gedit from GNOME.<br />
<pre>vim Windows_XP.sh
</pre><br />
Here's how my script looks:<br />
<pre>#!/bin/bash
#
# Description: Launches Windows XP QEMU64
#
# Version: 1
# Author: Clayton Kramer clayton.kramer @ gmail.com
# Modified: Fri 23 Apr 2010 11:43:35 AM EDT
#
# Ubuntu Karmic tweak - Prepare audio to use Pulse driver instead of ALSA
export QEMU_AUDIO_DRV=pa
# Launch Windows XP KVM
kvm \
-name "Windows XP Guest" \
-m 1024 \
-smp 1 \
-localtime \
-drive file=~/VM/WinXP/windows_xp.img,if=virtio,index=0,boot=on,cache=writeback \
-drive file=~/ISO/windows_xp_sp2.iso,if=ide,media=cdrom,index=2 \
-fda ~/ISO/viostor-31-03-2010-floppy.img \
-net nic,model=virtio \
-net user \
-soundhw ac97 \
-usb \
-usbdevice tablet
</pre><br />
By default Ubuntu 9.10's qemu-kvm will use ALSA drivers which can lead to some choppy sound. You can change this behavior by setting the QEMU_AUDIO_DRV environmental variable to pa before launching the KVM.<br />
<br />
I am using the VirtIO drivers in the script above. They improve the IO performance for Windows guests. Haydn Solomon provides some detailed instructions on setting them up in his KVM blog. I've decided to live a little dangerously and have enabled the writeback option for the block driver. <br />
<br />
<a href="http://www.linux-kvm.com/content/block-driver-updates-install-drivers-during-windows-installation">http://www.linux-kvm.com/content/block-driver-updates-install-drivers-during-windows-installation</a><br />
<br />
After the Windows installation is complete you can omit the virtual floppy disk device line. <br />
<br />
You may also want to note that my script configures the paravirtualized network device. You'll need to get the latest driver for that from:<br />
<br />
<a href="http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers">http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers</a><br />
<br />
If you want to get a Windows XP install going without the VirtIO drivers, you can use this compatibility script. It uses IDE for the IO controller bus and the Intel e1000 driver for the NIC.<br />
<br />
<pre># Launch Windows XP KVM (compatibility)
kvm \
-name "Windows XP Guest" \
-m 1024 \
-smp 1 \
-localtime \
-drive file=~/VM/WinXP/windows_xp.img,if=ide,index=0,boot=on \
-drive file=~/ISO/windows_xp_sp2.iso,if=ide,media=cdrom,index=2 \
-net nic,model=e1000 \
-net user \
-soundhw ac97 \
-usb \
-usbdevice tablet
</pre>Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-15945136165534047832010-04-26T19:55:00.000-07:002010-04-26T19:55:30.628-07:00How do you manage multiple Ubuntu desktops?I spent a large part of my day at work trying to figure out how to replicate Windows style login scripts for our office Ubuntu desktops. This seemed like a straight forward problem that some amount of community effort would have already solved. It was so easy to setup Active Directory (AD) integration with Likewise-open so where is the howto on setting up automatic drive mapping on a Gnome desktop?<br />
<br />
There are mass management tools like CFEngine and Puppet for server farms, but where are the tools for running an office on Linux desktops? There are some proprietary offerings from Likewise (the contributors of Likewise-open) and Centrify that provide tools for integrating AD group policy objects (GPOs), but they are geared toward Fortune 2000-size companies. I'm looking for something beyond just being able to authenticate with AD and connect to a CIFS share. If Linux and especially Ubuntu are ever going to really crack the desktop market, someone needs to launch a project to bridge this small enterprise gap.Ninjixhttp://www.blogger.com/profile/13199880430430223226noreply@blogger.com0tag:blogger.com,1999:blog-3634447693478982417.post-65782358988306545532010-04-24T06:17:00.000-07:002010-04-27T10:12:42.989-07:00Integrate Ubuntu 9.10 Karmic's Samba With Microsoft Active DirectoryLast week, I needed to set up a file server at work. Most of our back office servers run Ubuntu, except a few Microsoft Active Directory (AD) servers which control our workstations and user accounts. We make sure all of our Linux hosted services integrate with AD via Kerberos and LDAP.<br />
<br />
Samba has been able to integrate with AD via winbind for a few years now. There are numerous postings on the net about how to do this. All of them a just a little different and many are a just a touch out of date for various distributions. Here's what I used to get an Ubuntu 9.10 server connected with our Microsoft Windows domain.<br />
<br />
<h2>ACL Instructions</h2>A normal install of Ubuntu or Debian supports the standard Linux POSIX file system permissions. Access Control Lists (ACLs) provide a much more flexible way of specifying permissions on a file or other object than the standard Unix user/group/owner system. A good example you might deal with in production is the need to have the "Domain Admins" and "HR" groups have write permission on a folder while "Domain Users" should have only read access. That's not easy to do with standard POSIX.<br />
<br />
Install the acl package:<br />
<pre>sudo apt-get install acl</pre><br />
Now edit the fstab entry for the partition that will hold your Samba shares so that it mounts with ACLs enabled. I typically create my shares in the /home/shares folder, with /home mounted on its own volume.<br />
<pre>sudo vim /etc/fstab</pre><br />
Example:<br />
<pre>/dev/mapper/vg0-home /home ext4 acl,defaults 0 2</pre><br />
Please be careful when editing your fstab file. It's a good idea to make a backup of it first, especially if you are making changes to the / "root" mount.<br />
<br />
Some recommend a reboot at this point, but you don't have to if you execute the following remount command.<br />
<pre>sudo mount -o remount,rw /dev/mapper/vg0-home</pre><br />
<h2>Kerberos Instructions</h2>Install the kerberos packages:<br />
<pre>sudo apt-get install ntp krb5-config krb5-user</pre><br />
The package installer will prompt you for Kerberos server information. Don't worry about those prompts; just enter something to satisfy them. You are going to replace the cumbersome default krb5.conf with one specific to Active Directory authentication.<br />
<br />
If you already have the samba and winbind daemons installed and running, stop them now.<br />
<pre>sudo service samba stop
sudo service winbind stop
</pre><br />
Now let's setup the Kerberos configuration for authentication with Active Directory.<br />
<pre>sudo mv /etc/krb5.conf /etc/krb5.orig
sudo vim /etc/krb5.conf
</pre><br />
Copy the following text. Make sure to change <i>SCHOOL.UNIVERSITY.EDU</i> to your domain. Keep it in CAPS, though.<br />
<pre>## /etc/krb5.conf
[logging]
default = FILE:/var/log/krb5.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
[libdefaults]
default_realm = SCHOOL.UNIVERSITY.EDU
dns_lookup_realm = false
dns_lookup_kdc = false
clock_skew = 300
ticket_lifetime = 24h
forwardable = yes
[realms]
SCHOOL.UNIVERSITY.EDU = {
kdc = AD-CONTROLLER1.SCHOOL.UNIVERSITY.EDU
kdc = AD-CONTROLLER2.SCHOOL.UNIVERSITY.EDU
admin_server = AD-CONTROLLER1.SCHOOL.UNIVERSITY.EDU
default_domain = SCHOOL.UNIVERSITY.EDU
}
[domain_realm]
.school.university.edu = SCHOOL.UNIVERSITY.EDU
.SCHOOL.UNIVERSITY.EDU = SCHOOL.UNIVERSITY.EDU
</pre><br />
<b>Note:</b> In my example above, I've listed a secondary Kerberos server for authentication should the first domain controller be unavailable. You can add as many secondary kdc entries as you want. Remove the second kdc line if you only have one AD server.<br />
<br />
Test the Kerberos connection before proceeding. This can save you some troubleshooting headaches later on.<br />
<pre>kinit Administrator@SCHOOL.UNIVERSITY.EDU</pre><br />
The command should return cleanly, and klist should report a valid ticket good for 24 hours.<br />
<pre>klist</pre><br />
Now you can setup the Samba configuration.<br />
<h2>Samba smb.conf for Active Directory</h2>The default Samba config file is verbose with comments, and it's easy to make a mistake editing it. It is better to make a backup copy of it and create a clean configuration.<br />
<br />
<pre>sudo mv /etc/samba/smb.conf /etc/samba/smb.orig
sudo vim /etc/samba/smb.conf
</pre><br />
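If you'd rather see what the stock file actually sets before writing your own, you can filter out the comment and blank lines. A small sketch using grep over an inline sample (point it at /etc/samba/smb.orig in practice):<br />

```shell
#!/bin/sh
# Show only the effective settings of a Samba-style config by
# dropping lines that are blank or start with '#' or ';'.
# Demonstrated on an inline sample rather than the real smb.orig.
sample=$(mktemp)
cat > "$sample" <<'EOF'
# Sample smb.conf
[global]
workgroup = SCHOOL
; security = user (commented out)
security = ADS

EOF
grep -Ev '^[[:space:]]*([#;]|$)' "$sample"
```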
There are almost countless examples on the net about how to configure your Samba file. Everyone's got a slightly different setup. I've settled on the following for production use.<br />
<br />
<b>Note:</b> You will want to make sure to replace <i>SCHOOL.UNIVERSITY.EDU</i> with your domain name.<br />
<pre>[global]
dos charset = UTF8
display charset = UTF8
workgroup = SCHOOL
realm = SCHOOL.UNIVERSITY.EDU
server string = %h
security = ADS
map to guest = Bad User
null passwords = Yes
obey pam restrictions = Yes
pam password change = Yes
password server = AD-CONTROLLER1.SCHOOL.UNIVERSITY.EDU
username map = /etc/samba/smbusers
max log size = 10
log file = /var/log/samba/log.%m
unix extensions = No
deadtime = 10
socket options = TCP_NODELAY SO_KEEPALIVE SO_SNDBUF=65536 SO_RCVBUF=65536
load printers = No
disable spoolss = Yes
dns proxy = No
idmap uid = 10000-20000
idmap gid = 10000-20000
template shell = /bin/bash
winbind separator = +
winbind cache time = 3600
winbind enum users = Yes
winbind enum groups = Yes
winbind refresh ticket = Yes
create mask = 0777
directory mask = 0777
use sendfile = Yes
delete veto files = Yes
veto files = /.AppleDB/.AppleDouble/.AppleDesktop/:2eDS_Store/Network Trash Folder/Temporary Items/
map hidden = Yes
map system = Yes
[HR]
comment = School HR Server Share
path = /home/shares/HR
read only = No
create mask = 0775
valid users = @"HR-Dept"
</pre><br />
Some key notes about this configuration:<br />
<ul><li><b>socket options</b> - TCP_NODELAY makes a noticeable improvement in file transfer speeds, especially if you are using 1G NICs.</li>
<li><b>obey pam restrictions</b> - This integrates Samba with your PAM authentication system.</li>
<li><b>veto files</b> - Got Macs on your network? Keep those pesky .Apple file droppings off of your file server.</li>
<li><b>valid users</b> - Only members of the HR-Dept user group will have access to the HR file share.</li>
</ul><div>Restart the Samba and Winbind services.</div><pre>sudo /etc/init.d/winbind stop
sudo /etc/init.d/samba restart
sudo /etc/init.d/winbind start
</pre><br />
Now you can join the Samba server to the Active Directory Domain.<br />
<pre>sudo net ads join -U Administrator
</pre><br />
You should see a message that the target domain was joined successfully.<br />
<h3>Testing & Troubleshooting</h3>Check your domain membership with the wbinfo -t command. This will validate that the workstation trust account is working correctly:<br />
<pre>sudo wbinfo -t</pre><br />
You should see your domain users with this command:<br />
<pre>sudo wbinfo -u</pre><br />
The -g option should list your domain's groups.<br />
<pre>sudo wbinfo -g</pre><br />
<br />
<h2>Configure System Security</h2>Now modify the /etc/nsswitch.conf file so the system can start recognising your domain accounts.<br />
<br />
<pre>sudo vim /etc/nsswitch.conf</pre><br />
Append winbind after compat for passwd and group. Leave everything else alone.<br />
<pre>passwd: compat winbind
group: compat winbind
shadow: compat
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
networks: files
protocols: db files
services: db files
ethers: db files
rpc: db files
</pre><br />
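With winbind added to nsswitch.conf, ordinary NSS lookups should return domain accounts alongside local ones. A quick sketch with getent, demonstrated on a local account (SCHOOL+jdoe below is a hypothetical domain user):<br />

```shell
#!/bin/sh
# NSS lookups go through nsswitch.conf, so once winbind is listed
# a domain account resolves the same way a local one does, e.g.:
#   getent passwd 'SCHOOL+jdoe'
set -e

getent passwd root     # local account, resolves via files/compat
echo "lookup OK"
```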
Edit these PAM config files.<br />
<br />
<pre>sudo mv /etc/pam.d/common-account /etc/pam.d/common-account.orig
sudo vim /etc/pam.d/common-account
</pre><br />
Copy the following.<br />
<pre>account sufficient pam_winbind.so
account required pam_unix.so
</pre><br />
Now edit the common-auth file.<br />
<pre>sudo cp /etc/pam.d/common-auth /etc/pam.d/common-auth.orig
sudo vim /etc/pam.d/common-auth
</pre><br />
Now create a common-auth file that looks like this:<br />
<pre>#
# /etc/pam.d/common-auth - authentication settings common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of the authentication modules that define
# the central authentication scheme for use on the system
# (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
# traditional Unix authentication mechanisms.
#
# As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
# To take advantage of this, it is recommended that you configure any
# local modules either before or after the default block, and use
# pam-auth-update to manage selection of other modules. See
# pam-auth-update(8) for details.
# here are the per-package modules (the "Primary" block)
auth [success=3 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE
auth [success=2 default=ignore] pam_krb5.so minimum_uid=1000
auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass
# here's the fallback if no module succeeds
auth requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth required pam_permit.so
# and here are more per-package modules (the "Additional" block)
# end of pam-auth-update config
</pre><b>Note:</b> The bracketed control values are jump counts: <i>success=3</i> on the pam_winbind line means a successful winbind authentication skips the next three modules (pam_krb5, pam_unix and pam_deny) and lands on pam_permit.<br />
Set up the PAM common-session file so that new users to the system get a home directory created from the standard skel profile on first login.<br />
<br />
<pre>sudo cp /etc/pam.d/common-session /etc/pam.d/common-session.orig
sudo vim /etc/pam.d/common-session
</pre><br />
Your common-session should look like this:<br />
<pre>#
# /etc/pam.d/common-session - session-related modules common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of modules that define tasks to be performed
# at the start and end of sessions of *any* kind (both interactive and
# non-interactive).
#
# As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
# To take advantage of this, it is recommended that you configure any
# local modules either before or after the default block, and use
# pam-auth-update to manage selection of other modules. See
# pam-auth-update(8) for details.
# here are the per-package modules (the "Primary" block)
session [default=1] pam_permit.so
# here's the fallback if no module succeeds
session requisite pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
session required pam_permit.so
# and here are more per-package modules (the "Additional" block)
session optional pam_krb5.so minimum_uid=1000
session required pam_unix.so
session required pam_mkhomedir.so umask=0022 skel=/etc/skel
# end of pam-auth-update config
</pre><br />
<h3>Sudoers file</h3>You can grant domain administrators elevated sudo permissions on the server by adding a line to your sudo configuration.<br />
<br />
Open the sudoers file with the safe editor:<br />
<pre>sudo visudo
</pre><br />
Add the following to the configuration:<br />
<pre># Allow "Domain Admins" from the SCHOOL domain to run all commands
%SCHOOL+Domain\ Admins ALL=(ALL) ALL
</pre><br />
You will want to replace <i>SCHOOL</i> with your domain name.<br />
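To confirm the rule will actually match, you can check group membership as the system sees it. A small sketch, demonstrated with a local account (SCHOOL+jdoe and SCHOOL+Domain Admins are hypothetical names):<br />

```shell
#!/bin/sh
# Check whether a user belongs to a group -- the membership test a
# %group sudoers rule relies on, e.g.:
#   in_group 'SCHOOL+jdoe' 'SCHOOL+wheel'
# Note: the space-split below breaks on group names containing
# spaces (like "Domain Admins"); for those, eyeball the output of
# `id 'SCHOOL+jdoe'` instead.
in_group() {
    id -Gn "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

# Demonstrated with a local account:
in_group root root && echo "root is a member of group root"
```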
<br />
<br />
<hr /><br />
<h2>References</h2><a href="http://aisalen.wordpress.com/2007/08/10/acls-on-samba/">ACLs on Samba</a> - Dustin Puryear<br />
<a href="http://www.enterprisenetworkingplanet.com/linux_unix/article.php/3487081/Join-Samba-3-to-Your--Active-Directory-Domain.htm">Join Samba 3 to Your Active Directory Domain</a> - Carla Schroder<br />
<a href="https://help.ubuntu.com/community/ActiveDirectoryWinbindHowto">Active Directory Winbind Howto</a> - Ubuntu Community Documentation