Thursday, February 24, 2011

Force Replication Between All Active Directory Servers

Occasionally, I have to troubleshoot Active Directory replication issues between branch offices, and I can never remember all of the resync arguments for the repadmin.exe command. So I'm posting them here.

repadmin /syncall /A /e /P

This forces the DC you run it on to synchronize every naming context (NC) it holds (/A) with all of its replication partners across the enterprise (/e), pushing changes outward (/P).

You should see output like the following, repeated for each NC in your domain:

Syncing all NC's held on ATLAS.
Syncing partition: DC=ForestDnsZones,DC=my,DC=corp,DC=com
CALLBACK MESSAGE: The following replication is in progress:
    From: c2fa9a13-bc15-419c-b416-21e6da3d5760._msdcs.my.corp.com
    To  : cee785b6-01fe-490c-8e50-5199841a1b58._msdcs.my.corp.com
CALLBACK MESSAGE: The following replication is in progress:
    From: c2fa9a13-bc15-419c-b416-21e6da3d5760._msdcs.my.corp.com
    To  : 62aa2e39-9c52-4eef-a789-f201350c0b02._msdcs.my.corp.com
CALLBACK MESSAGE: The following replication completed successfully:
    From: c2fa9a13-bc15-419c-b416-21e6da3d5760._msdcs.my.corp.com
    To  : cee785b6-01fe-490c-8e50-5199841a1b58._msdcs.my.corp.com
CALLBACK MESSAGE: The following replication completed successfully:
    From: c2fa9a13-bc15-419c-b416-21e6da3d5760._msdcs.my.corp.com
    To  : 62aa2e39-9c52-4eef-a789-f201350c0b02._msdcs.my.corp.com
CALLBACK MESSAGE: SyncAll Finished.
SyncAll terminated with no errors.
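
If you need to kick off the sync from a remote server, /syncall also accepts a DC name (ATLAS here is the DC from the sample output above). repadmin's read-only reporting switches are handy for confirming that everything converged afterward:

repadmin /syncall ATLAS /A /e /P
repadmin /replsummary
repadmin /showrepl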

How to automatically connect a Windows 7 or 2008 R2 VPN on startup

Do you have a Windows 7 or 2008 R2 machine that needs to automatically connect to a VPN? Here are some instructions on configuring the Task Scheduler to do this for you.

My thanks to RpCahoon for providing his helpful post on Microsoft's Social Answers site. I'm also giving Microsoft a nod for doing such a thorough job with the modern Task Scheduler.

Instructions
  1. Open Task Scheduler
    Start > All Programs > Accessories > System Tools > Task Scheduler
  2. Click "Create Task" in the Actions pane on the right
  3. General Tab
    1. Provide a logical name for the task like "Auto VPN"
    2. Switch the running task mode to "Run whether user is logged on or not"
    3. Enable the "Run with highest privileges" option
    4. Change the "Configure for:" drop-down to Windows 7, Windows Server 2008 R2
  4. Triggers Tab
    1. Click the "New..." button
    2. Change "Begin the task:" to At start up
    3. (Optional) Enable "Delay task for" and set to 5 minutes. This give the machine a chance to idle down before launching the VPN.
  5. Actions Tab
    1. Click the "New..." button
    2. Enter c:\windows\system32\rasdial.exe in the "Program/script:" field. You can also browse to it if you don't want to type it or if your Windows install directory is different.
    3. Type the connection name in the "Add arguments" field. rasdial.exe requires you to wrap the connection name in quotes if it contains spaces. You may also need to append the connection's username and password if they are required (see the example after these instructions).
  6. Conditions Tab
    1. Un-check all of the options on the conditions tab.
  7. Settings Tab
    1. (Optional) Enable "If the task fails, restart every:" and set it to an appropriate value. I set mine to 1 hour in case there is a problem on the VPN server's end.
    2. (Optional) Set the "Attempt to restart up to:" value to an acceptable number. My default is 72 times at a 1 hour interval. This covers a long weekend.
  8. Save the new task
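
For reference, here's a sketch of what the scheduled action ends up running; the connection name, username, and password below are placeholders, so substitute your own. rasdial can also tear the connection down with /disconnect:

c:\windows\system32\rasdial.exe "Office VPN" vpnuser P@ssw0rd
c:\windows\system32\rasdial.exe "Office VPN" /disconnect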

Friday, February 18, 2011

Get those FSYNC numbers up on your ZFS pool

For the last week, I've been trying to figure out why our 10-drive ZFS zpool has been delivering such lousy NFS performance to our Proxmox KVM cluster.

Here's what pveperf was returning:
pveperf /mnt/pve/kvm-images/
CPU BOGOMIPS:      76608.87
REGEX/SECOND:      896132
HD SIZE:           7977.14 GB (xxx.xxx.xxx.xxx:/volumes/vol0/kvm-images)
FSYNCS/SECOND:     23.15
DNS EXT:           58.84 ms
DNS INT:           1.50 ms (my.company.com)

The zpool looked like this:
zpool status vol0
  pool: vol0
 state: ONLINE
 scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        vol0                       ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c0t5000C50010377B5Bd0  ONLINE       0     0     0
            c0t5000C5001037C317d0  ONLINE       0     0     0
            c0t5000C5001037EED7d0  ONLINE       0     0     0
            c0t5000C50010381737d0  ONLINE       0     0     0
            c0t5000C50010381BBBd0  ONLINE       0     0     0
            c0t5000C50010382777d0  ONLINE       0     0     0
            c0t5000C5001038291Fd0  ONLINE       0     0     0
            c0t5000C500103870A3d0  ONLINE       0     0     0
            c0t5000C500103871C3d0  ONLINE       0     0     0
            c0t5000C500103924E3d0  ONLINE       0     0     0
            c0t5000C500103941F7d0  ONLINE       0     0     0
        cache
          c0t50015179591D9AEFd0    ONLINE       0     0     0
          c0t50015179591DACA1d0    ONLINE       0     0     0
          c1t2d0                   ONLINE       0     0     0
        spares
          c0t5000C50010395057d0    AVAIL   

errors: No known data errors

Raw write speed wasn't a problem. Tests of copying DVD ISO files were super fast over the 10G network backbone. But the performance of creating new files and folders really hurt. This was very apparent when I started running bonnie++ against the NFS shares from the Proxmox nodes. Bonnie++ zipped along until it started its "Create files..." tests. The Linux client would practically lock up.

So after a little Google searching on ZFS keywords, I came across Joe Little's blog post, ZFS Log Devices: A Review of the DDRdrive X1. This got me thinking about my zpool setup. Looking at the configuration again, I realized that I'd made a mistake and added the second Intel X25-M SSD to the cache pool instead of the log pool. :)

Thanks to ZFS awesomeness, it was really easy to pull the SSD out of the cache and designate it as part of the log pool. No downtime for the production system and no wasted weird weekend hours staring at a glowing terminal console.
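
For the record, the swap boiled down to two commands, sketched here with the device name from the zpool listings; double-check the device ID against your own pool before running anything:

zpool remove vol0 c1t2d0   # drop the SSD from the cache devices
zpool add vol0 log c1t2d0  # re-add it as a dedicated log (ZIL) device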

Oh man, did that make a difference in performance.

Here's what the reconfigured vol0 zpool looks like:
zpool status vol0
  pool: vol0
 state: ONLINE
 scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        vol0                       ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c0t5000C50010377B5Bd0  ONLINE       0     0     0
            c0t5000C5001037C317d0  ONLINE       0     0     0
            c0t5000C5001037EED7d0  ONLINE       0     0     0
            c0t5000C50010381737d0  ONLINE       0     0     0
            c0t5000C50010381BBBd0  ONLINE       0     0     0
            c0t5000C50010382777d0  ONLINE       0     0     0
            c0t5000C5001038291Fd0  ONLINE       0     0     0
            c0t5000C500103870A3d0  ONLINE       0     0     0
            c0t5000C500103871C3d0  ONLINE       0     0     0
            c0t5000C500103924E3d0  ONLINE       0     0     0
            c0t5000C500103941F7d0  ONLINE       0     0     0
        logs
          c1t2d0                   ONLINE       0     0     0
        cache
          c0t50015179591D9AEFd0    ONLINE       0     0     0
          c0t50015179591DACA1d0    ONLINE       0     0     0
        spares
          c0t5000C50010395057d0    AVAIL   

errors: No known data errors

Now ZFS can keep up with all of the Linux FSYNC disk requests. Check out the Proxmox performance test improvements:

pveperf /mnt/pve/kvm-images/
CPU BOGOMIPS:      76608.87
REGEX/SECOND:      896132
HD SIZE:           7977.14 GB (xxx.xxx.xxx.xxx:/volumes/vol0/kvm-images)
FSYNCS/SECOND:     1403.21
DNS EXT:           58.84 ms
DNS INT:           1.50 ms (my.company.com)
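
If you want to watch the log device soak up the synchronous writes, zpool iostat will show per-device activity (refreshing every 5 seconds here):

zpool iostat -v vol0 5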

Get your CentOS 5.5 mouse to behave as a Linux KVM guest

Just spent 30 minutes trying to figure out why CentOS 5.5 wasn't playing nice with QEMU/KVM's USB tablet emulator.

All you need to do is edit xorg.conf old-school style. My thanks to dyasny for posting his xorg.conf snippet.

Here's a copy of my working configuration.
# Xorg configuration created by pyxf86config

Section "ServerLayout"
        Identifier     "Default Layout"
        Screen      0  "Screen0" 0 0
        InputDevice    "Keyboard0" "CoreKeyboard"
        InputDevice "Tablet" "SendCoreEvents"
        InputDevice "Mouse" "CorePointer"
EndSection

Section "InputDevice"
        Identifier  "Keyboard0"
        Driver      "kbd"
        Option      "XkbModel" "pc105"
        Option      "XkbLayout" "us"
EndSection

Section "InputDevice"
        Identifier "Mouse"
        Driver "void"
        #Option "Device" "/dev/input/mice"
        #Option "Emulate3Buttons" "yes"
EndSection

Section "InputDevice"
        Identifier "Tablet"
        Driver "evdev"
        Option "Device" "/dev/input/event2"
        Option "CorePointer" "true"
        Option "Name" "Adomax QEMU USB Tablet"
EndSection

Section "Device"
        Identifier  "Videocard0"
        Driver      "cirrus"
EndSection

Section "Screen"
        Identifier "Screen0"
        Device     "Videocard0"
        DefaultDepth     24
        SubSection "Display"
                Viewport   0 0
                Depth     24
        EndSubSection
EndSection
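
If the tablet doesn't show up as /dev/input/event2 on your guest, the kernel's input device list will tell you which event node it actually received; grep around the tablet entry and check the Handlers line:

grep -B 1 -A 4 Tablet /proc/bus/input/devices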

Hope this helps.

Tuesday, February 1, 2011

Use Clonezilla to image KVM VirtIO disks

I'm always seeking to squeeze more speed out of common administrator tasks like disk imaging and P2V conversions. Today I tried using my favorite FOSS cloning software, Clonezilla, to restore an image to a KVM guest running VirtIO disks. What I found was that the current stable release (20110113-maverick) doesn't recognize VirtIO's /dev/vd[a,b,c...] disk naming scheme. You get used to this working with KVM, and I'm still on the fence about VirtIO's naming convention versus the more common /dev/sd[a,b,c...] scheme.

Luckily, another Clonezilla user already submitted a patch for VirtIO drives back in December. It should make it into a future stable release in a few months.

http://sourceforge.net/tracker/index.php?func=detail&aid=3112544&group_id=115473&atid=671650

I was in a rush to get a P2V conversion complete, so I used a quick sed one-liner to modify the stable Clonezilla's Perl scripts so they recognize the /dev/vda disk. You'll need to drop into shell mode to execute this.

sudo sed -i 's/\[hs\]/\[vhs\]/g' /opt/drbl/sbin/ocs-*

Keep in mind that these changes will be lost if you're booting from a live CD.
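
A quick way to confirm the substitution took is to check that the scripts now carry the [vhs] pattern:

grep -l '\[vhs\]' /opt/drbl/sbin/ocs-*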

Using the VirtIO disk drivers improved the disk imaging throughput on my machine by about 15 percent. Also, don't forget to preload the VirtIO drivers on a Windows machine before imaging and restoring; otherwise you'll get a BSOD on boot.