Tuesday, February 13, 2018

Resize encrypted CentOS7 PV volume


If you ticked the "encrypt volume" checkbox during the CentOS 7 installation, the disk is encrypted at the LVM PV (Physical Volume) level using LUKS.
Therefore you need to perform these steps to grow one of the LVM volumes:
  • Increase the Partition of the disk
  • Increase the PV (physical LVM Volume)
  • Increase the LV (logical LVM Volume) 
  • Increase the file system of the partition holding your data.
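For orientation, these layers stack as follows (device and volume names as used in the example below):

    /dev/sda                          virtual disk
      └─ /dev/sda2                    partition
           └─ /dev/mapper/luks-...    LUKS container = LVM PV
                └─ VG "cl"            volume group
                     └─ /dev/cl/DATA  logical volume (LV)
                          └─ XFS      file system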
Steps (example using VMWare ESXi VM):
  1. Increase the disk size in vCenter / ESXi for the VM (e.g. from 750 GB to 800 GB). This can be done while the VM is running.
  2. Create a snapshot of the VM
  3. Reboot the VM
  4. Resize the disk partition
    fdisk /dev/sda
    
    • Print the partition table:
      Command (m for help): p
      
      Disk /dev/sda: 859.0 GB, 858993459200 bytes, 1677721600 sectors <-- the new size: 800 GiB (fdisk shows decimal GB)
      
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048     2099199     1048576   83  Linux
      /dev/sda2         2099200  1572863999   785382400   83  Linux <-- Holds the LVM vols.
      
    • Now delete the partition (not kidding):
      Command (m for help): d
      Partition number (1,2, default 2): 2
      
    • Re-create the partition with a larger size:
      Command (m for help): n
       Partition type:
         p   primary (1 primary, 0 extended, 3 free)
         e   extended
      Select (default p): p
      Partition number (2-4, default 2): 2
      First sector (2099200-1677721599, default 2099200): [ENTER]
      Using default value 2099200
      Last sector, +sectors or +size{K,M,G} (2099200-1677721599, default 1677721599): [ENTER]
      Using default value 1677721599
      Partition 2 of type Linux and of size 799 GiB is set
      
    • Check the partition table:
      Command (m for help): p
      
      Disk /dev/sda: 859.0 GB, 858993459200 bytes, 1677721600 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk label type: dos
      Disk identifier: 0x000f06d8
      
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048     2099199     1048576   83  Linux
      /dev/sda2         2099200  1677721599   837811200   83  Linux   <-- New size
      
    • Write the partition table and exit:
      Command (m for help): w
      
  5. Reboot, so the kernel re-reads the new partition table and the LUKS container is re-opened at its new size
  6. Resize the PV (LVM Physical Volume):
    • Display the PV volumes:
      [root@localhost]# pvdisplay
        --- Physical volume ---
        PV Name               /dev/mapper/luks-999a99b9-8a99-9abc-d999-b99bb9999bb9
        VG Name               cl
        PV Size               <749,00 GiB / not usable 0
        Allocatable           yes
        PE Size               4.00 MiB
        Total PE              191743
        Free PE               1
        Allocated PE          191742
        PV UUID               Zu21Ve-7mx5-v4p2-bxfa-ZH2N-EbWE-WeMk3T
      
    • Resize the PV (to the maximum available)
      [root@localhost]# pvresize /dev/mapper/luks-999a99b9-8a99-9abc-d999-b99bb9999bb9
      Physical volume "/dev/mapper/luks-999a99b9-8a99-9abc-d999-b99bb9999bb9" changed
      1 physical volume(s) resized / 0 physical volume(s) not resized
      
    • check the PV:
      [root@localhost]# pvdisplay
        --- Physical volume ---
        PV Name               /dev/mapper/luks-999a99b9-8a99-9abc-d999-b99bb9999bb9
        VG Name               cl
        PV Size               <799,00 GiB / not usable 1,00 MiB
        Allocatable           yes
        PE Size               4,00 MiB
        Total PE              204543
        Free PE               12801
        Allocated PE          191742
        PV UUID               Zu21Ve-7mx5-v4p2-bxfa-ZH2N-EbWE-WeMk3T
      
  7. Resize LV (logical volume):
    • Display the Logical volume(s):
      [root@localhost]# lvdisplay
       ...
       --- Logical volume ---
        LV Path                /dev/cl/DATA
        LV Name                DATA
        VG Name                cl
        LV UUID                X8AAA-4aAa-Aaaa-8A8A-BbbB-bb8b-bbB8BB
        LV Write Access        read/write
        LV Creation host, time localhost.localdomain, 2018-02-02 09:50:26 +0400
        LV Status              available
        # open                 1
        LV Size                <411,12 GiB
        Current LE             105246
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     8192
        Block device           253:8
      
    • Resize the LV:
      [root@localhost]# lvresize --size +50G /dev/cl/DATA
      Size of logical volume cl/DATA changed from <411,12 GiB (105246 extents) to <461,12 GiB (118046 extents).
      Logical volume cl/DATA successfully resized.
      
    • Check the LV partition:
      [root@localhost]# lvdisplay
        ...
        --- Logical volume ---
        LV Path                /dev/cl/DATA
        LV Name                DATA
        VG Name                cl
        LV UUID                X8AAA-4aAa-Aaaa-8A8A-BbbB-bb8b-bbB8BB
        LV Write Access        read/write
        LV Creation host, time localhost.localdomain, 2018-02-02 09:50:26 +0400
        LV Status              available
        # open                 1
        LV Size                <461,12 GiB
        Current LE             118046
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     8192
        Block device           253:8
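    • Alternative: instead of running lvresize and xfs_growfs (step 8) separately, lvextend can grow the LV and the file system in one go, assuming your LVM version supports the -r (--resizefs) option:
      lvextend --size +50G -r /dev/cl/DATA   # -r calls fsadm, which invokes xfs_growfs for you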
      
  8. Resize the file system (XFS can, and must, be grown while mounted, so no unmount is needed):
    [root@localhost]# xfs_growfs /dev/cl/DATA
    meta-data=/dev/mapper/cl-DATA    isize=512    agcount=4, agsize=26942976 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0 spinodes=0
    data     =                       bsize=4096   blocks=107771904, imaxpct=25
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal               bsize=4096   blocks=52623, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    data blocks changed from 107771904 to 120879104
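As a final sanity check (the mount point /DATA is a placeholder; use wherever your LV is mounted):

    df -h /DATA    # should now show the grown capacity
    lvs cl         # lists the LVs of VG "cl" with their new sizes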

Wednesday, December 27, 2017

Relocate defective HDD Sectors of an iMac 27" Fusion Drive

I was facing a problem in one of my VMware Fusion VMs, which I had used for many years on an encrypted OSX High Sierra Fusion Drive (1TB HDD, 120GB SSD) in my 27" iMac. It led to VM crashes, and HDD I/O errors were reported in the OSX logs when I tried to reinstall a program in the VM.

I used SMARTReporter to perform a SMART long test, which failed with a SMART attribute 197 (Current_Pending_Sector) count of 40, meaning the drive's internal SMART logic had detected 40 defective sectors.

Note: You can check the SMART status of your drives either by using the OSX SMARTReporter tool (see App Store), or by installing the smartmontools via Homebrew and running "smartctl -a /dev/disk1" (disk1 is the HDD in my case; use "diskutil list" to get your drive nodes).

A few people think the drive has some "magic" to relocate defective sectors automatically. WRONG. Modern SATA HDDs relocate defective sectors to spare sectors only on a WRITE to such a sector.

So my question was: how can I detect and forcibly write to such a defective sector, so the HDD relocates it to a good one?

Using OSX High Sierra tools, I found no way to write to specific sectors.
Therefore I booted Ubuntu from a prepared USB stick and repaired the defective sectors there.
 
Prerequisites: Install Homebrew if you do not have it on your OSX system already.

Note:
The next steps can also be performed if your drive has defective sectors and you run any other OS (Linux / Windows, etc.). If you are on Linux, you can skip steps 1-2.

Warning!
You are writing dummy data to your HDD, so expect data loss. You perform the next steps at your own risk.
MAKE A BACKUP OF YOUR DRIVE TO ANOTHER DRIVE (TimeMachine, CarbonCopyCloner, etc.) before you start these steps. YOU ARE WARNED.

It may be wise to send your Mac in for repair, or to replace the defective HDD with a new one, instead of performing the next steps.

1. Install Ubuntu on a USB Stick

See https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-macos for how to create such a bootable USB stick.

2. Boot Ubuntu from the USB Stick

For this, restart your Mac, hold the Option (alt) key when you hear the boot chime and select the "EFI Boot" drive. In the Ubuntu boot loader, start the Live version.
Note: If you use a Magic Mouse / wireless keyboard, they are not automatically connected to Ubuntu. It is best to connect a USB keyboard and then pair the Bluetooth keyboard / mouse with Ubuntu (see Preferences > Bluetooth).

3. Install needed tools in Ubuntu

We need some tools to fix the HDD. Open a terminal and enter:

    sudo bash
    apt install smartmontools sg3-utils

4. Start SMART long selftest

    #> smartctl -t long /dev/sda

5. Check SMART test progress/errors


    #> smartctl -a /dev/sda

If it prints something like this, it detected sector errors:

# 1 Extended offline Completed: read failure 90% 25836 1370708040

There we see the defective sector at LBA 1370708040.

Note:
   This need not be the exact error location, therefore we check this sector and the ones following it.

6. Try to read the defective sector

    #> hdparm --read-sector 1370708040 /dev/sda

Alternatively, use sg_verify from the sg3-utils package to check the sector:

       #> sg_verify --lba=1370708040 /dev/sda

If this returns something like "bad/missing sense data" or similar errors, the sector is defective.
So it is time to write to the sector, so the HDD relocates the bad sector to a good one (data at that sector is definitely lost).

7. Write the bad sector (relocate sector)

If step 6 led to sense errors, we try to write to the defective sector, so it gets relocated:

    #> hdparm --yes-i-know-what-i-am-doing --write-sector 1370708040 /dev/sda

8. Check the sectors following the defective one

It is a good idea to check the sectors following the defective one for errors as well, as there is a good chance they are defective, too.

    #> hdparm --read-sector 1370708041 /dev/sda


Repeat steps 8 and 7 with incremented sector numbers as long as there are unreadable sectors.
When you reach a zone where the sectors are OK again, repeat step 4 until the SMART self-test checks the whole disk without errors.
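If a whole range of sectors is affected, the read/write cycle can be scripted. A minimal sketch, assuming hdparm returns a non-zero exit status on a failed sector read; START and COUNT are placeholders, and every unreadable sector found is overwritten, so the data-loss warning above applies in full:

    #!/bin/bash
    # scan a sector range; rewrite (and thereby relocate) only unreadable sectors
    START=1370708040
    COUNT=256
    for ((lba=START; lba<START+COUNT; lba++)); do
      if ! hdparm --read-sector $lba /dev/sda >/dev/null 2>&1; then
        echo "LBA $lba unreadable - rewriting"
        hdparm --yes-i-know-what-i-am-doing --write-sector $lba /dev/sda
      fi
    done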

9. Boot into OSX and perform Disk repair


Reboot into OSX, start the Disk Utility and check the drive for errors.
If OSX reports unrepairable errors, you can try to fix them by performing an fsck in single-user mode; that, however, is beyond the scope of this document.

Good luck.


Monday, February 20, 2017

Import GitHub Enterprise into VMware vCenter 6.5

I wanted to try GitHub Enterprise 2.8.7 in my vCenter 6.5 environment, but the OVF import always aborted with an error message saying that ProductInfo is not allowed in the envelope.

It seems the GitHub OVF template was created with a fairly old ovftool.

Fix:

1. Unpack the OVF (it's a ZIP file)
2. Edit the .ovf file and move the "ProductSection" XML element into the <VirtualSystem> node. See this Gist.
3. Afterwards, re-compute the SHA1 fingerprint of the .ovf file and update the .mf manifest file with the new fingerprint
4. Re-package the files into a new .ovf zip archive.
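For step 3, a hedged shell sketch; the file name is a placeholder and the exact manifest line format may differ in your package:

    openssl sha1 github-enterprise.ovf
    # copy the printed digest into the matching "SHA1(github-enterprise.ovf)=" line of the .mf file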

After this change, importing the GitHub Enterprise OVF worked here.


Wednesday, October 19, 2016

Grails 3 quartz-plugin with Clustering Support

If you need to run Quartz in Grails 3 on a clustered application server environment, you must change the default config so it is cluster-aware. Otherwise, each job runs independently on each node.

1. Create the DB Tables for Quartz

This was quite hard, and I needed to dig into the Quartz library source code to get a schema for MySQL with InnoDB (which had a typo..). I then created a migration file for the Grails database-migration plugin.
Just copy this migration file into your grails-app/migrations directory and register it in changelog.groovy, as shown below.
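Registering it is a one-liner in changelog.groovy (the migration file name here is hypothetical):

    databaseChangeLog = {
      // ... your existing entries ...
      include file: 'quartz-jdbc-store-mysql.groovy'
    }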


2. Configure database-migration plugin

Next, you need to tweak the database-migration config so it ignores the Quartz tables. Otherwise, it would drop the tables with the next dbm_gorm_diff run. Example for application.groovy:

grails.plugin.databasemigration.excludeObjects = ['QRTZ_BLOB_TRIGGERS','QRTZ_CALENDARS', 'QRTZ_CRON_TRIGGERS', 'QRTZ_FIRED_TRIGGERS', 'QRTZ_JOB_DETAILS', 'QRTZ_LOCKS', 'QRTZ_PAUSED_TRIGGER_GRPS', 'QRTZ_SCHEDULER_STATE', 'QRTZ_SIMPLE_TRIGGERS', 'QRTZ_SIMPROP_TRIGGERS', 'QRTZ_TRIGGERS']


3. Configure quartz-plugin


Next, you need to configure the Grails quartz-plugin to use the JDBC job store and to enable clustering, for example as sketched below.
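A hedged sketch for grails-app/conf/application.groovy; the props follow the standard Quartz JDBC-store property names, so verify them against your quartz-plugin version:

    quartz {
        jdbcStore = true
        props {
            // 'AUTO' generates a unique instance id per node, which clustering requires
            scheduler.instanceId = 'AUTO'
            jobStore.'class' = 'org.quartz.impl.jdbcjobstore.JobStoreTX'
            jobStore.driverDelegateClass = 'org.quartz.impl.jdbcjobstore.StdJDBCDelegate'
            jobStore.tablePrefix = 'QRTZ_'
            jobStore.isClustered = true
            jobStore.clusterCheckinInterval = 5000
        }
    }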

4. Test clustering

Start your application. You should see a message like this:

  Using job-store 'org.springframework.scheduling.quartz.LocalDataSourceJobStore' - which supports persistence. and is clustered.


Friday, October 14, 2016

Grails 3.x Spring Basic Authentication with JSON handling

If you need to secure a JSON API using Basic Authentication via HTTPS, you need to tweak the Spring Security configuration and use custom beans to support JSON / HTML error responses.

If possible, use a more sophisticated authentication scheme for REST APIs, e.g. the spring-security-rest Grails plugin, which supports token-based authentication (OAuth-like).

If you still need to support Basic Auth for your Grails REST API (e.g. for server-to-server communication), read on.

Goals

  1. Support Basic Auth only on the REST API URLs; use the default (web based) authentication on all other URLs to be secured
  2. As the REST API is stateless, no sessions should be created when accessing the API
  3. If authentication or authorization errors occur, the authenticator should return JSON error bodies if accessed with a JSON Content-Type, and HTML errors if the API was accessed by a browser (e.g. for debugging or documentation purposes)

Implementation Details


1. CustomBasicAuthenticationEntryPoint:


import groovy.transform.CompileStatic
import groovy.util.logging.Slf4j
import org.springframework.security.core.AuthenticationException
import org.springframework.security.web.authentication.www.BasicAuthenticationEntryPoint

import javax.servlet.ServletException
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse

/**
 * AuthenticationEntryPoint for BasicAuthentication.
 * Triggered if user is not (successfully) authenticated on a secured Basic Auth URL resource.
 * Maps all errors to a 401 status code and returns an HTML or JSON error string depending on the request content type.
 * Also, sends a Basic Auth Challenge header (if accessing via Browser for test purposes, to show the login popup)
 *
 * Author: Robert Oschwald
 * License: Apache 2.0
 *
 */
@Slf4j
@CompileStatic
public class CustomBasicAuthenticationEntryPoint extends BasicAuthenticationEntryPoint {

  @Override
  public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authException)
    throws IOException, ServletException {

    String errorMessage = authException.getMessage()
    int statusCode = HttpServletResponse.SC_UNAUTHORIZED

    response.addHeader("WWW-Authenticate", "Basic realm=\"${realmName}\"")

    if (request.contentType == "application/json") {
      log.warn("Basic Authentication failed (JSON): ${errorMessage}")
      response.setContentType("application/json")
      response.sendError(statusCode, "{error:${HttpServletResponse.SC_UNAUTHORIZED}, message:\"${errorMessage}\"")
      return
    }

    // non-json request
    response.sendError(statusCode, "$statusCode : $errorMessage")
  }

}

2. CustomBasicAuthenticationAccessDeniedHandlerImpl:


import groovy.transform.CompileStatic
import org.springframework.security.access.AccessDeniedException
import org.springframework.security.web.access.AccessDeniedHandlerImpl
import javax.servlet.ServletException
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse
/**
 * Basic Auth Extended implementation of 
 * {@link org.springframework.security.web.access.AccessDeniedHandlerImpl}.
 * Maps errors to a 403 status code and returns an HTML or JSON error string depending on the request content type.
 * Author: Robert Oschwald
 * License: Apache 2.0
 */
@CompileStatic
class CustomBasicAuthenticationAccessDeniedHandlerImpl extends AccessDeniedHandlerImpl {

  @Override
  public void handle(HttpServletRequest request, HttpServletResponse response, AccessDeniedException accessDeniedException) throws IOException, ServletException {
    String errorMessage = accessDeniedException.getMessage()
    int statusCode = HttpServletResponse.SC_FORBIDDEN
    if (request.contentType == "application/json"){
      response.setContentType("application/json")
      response.sendError(statusCode, "{error:${statusCode}, message:\"${errorMessage}\"}")
      return
    }
    // non-json request
    response.sendError(statusCode, "$statusCode : $errorMessage")
  }
}

3. grails-app/conf/spring/resources.groovy:


  // Needed imports (packages per Spring Security 4.x; verify for your Boot version):
  //   grails.plugin.springsecurity.SpringSecurityUtils
  //   org.springframework.boot.web.servlet.FilterRegistrationBean (org.springframework.boot.context.embedded in older Boot versions)
  //   org.springframework.security.web.access.ExceptionTranslationFilter
  //   org.springframework.security.web.authentication.www.BasicAuthenticationFilter
  //   org.springframework.security.web.context.NullSecurityContextRepository
  //   org.springframework.security.web.context.SecurityContextPersistenceFilter
  //   org.springframework.security.web.savedrequest.NullRequestCache

  // No Sessions for Basic Auth
  statelessSecurityContextRepository(NullSecurityContextRepository) {}

  // No Sessions for Basic Auth
  customBasicRequestCache(NullRequestCache)
  
  statelessSecurityContextPersistenceFilter(SecurityContextPersistenceFilter, ref('statelessSecurityContextRepository')) {}

  statelessSecurityContextPersistenceFilterDeregistrationBean(FilterRegistrationBean){
    filter = ref('statelessSecurityContextPersistenceFilter')
    // To prevent Spring Boot automatic filter bean registration in the ApplicationContext
    enabled = false
  }

  /**
   * Sends HTTP 401 error status code + HTML/JSON error in body dependent on the request type
   * if user is not authenticated, or if authentication failed.
   */
  customBasicAuthenticationEntryPoint(CustomBasicAuthenticationEntryPoint) {
    realmName = SpringSecurityUtils.securityConfig.basic.realmName
  }

  /**
  * Sends HTTP 403 error status code + HTML/JSON error in body dependent on the request type
  * if user is authenticated, but not authorized.
  */
  basicAccessDeniedHandler(CustomBasicAuthenticationAccessDeniedHandlerImpl)
  
  customBasicAuthenticationFilter(BasicAuthenticationFilter, ref('authenticationManager'), ref('customBasicAuthenticationEntryPoint')) {
    authenticationDetailsSource = ref('authenticationDetailsSource')
    rememberMeServices = ref('rememberMeServices')
    credentialsCharset = SpringSecurityUtils.securityConfig.basic.credentialsCharset // 'UTF-8'
  }

  /** 
  * basicExceptionTranslationFilter with customBasicRequestCache (no Sessions)
  * The bean name is used in Spring-Security by default.
  */
  basicExceptionTranslationFilter(ExceptionTranslationFilter, ref('customBasicAuthenticationEntryPoint'), ref('customBasicRequestCache')) {
    accessDeniedHandler = ref('basicAccessDeniedHandler')
    authenticationTrustResolver = ref('authenticationTrustResolver')
    throwableAnalyzer = ref('throwableAnalyzer')
  }

4. Configure the Spring Security Core plugin in grails-app/conf/application.groovy:


// Spring Security Core plugin
grails {
  plugin {
    springsecurity {
	  securityConfigType = "InterceptUrlMap" // if using the chainmap in application.groovy. If you prefer Annotations, omit.
	  auth.forceHttps = true
	  useBasicAuth = true // Used for /api/ calls. See chainMap.
	  basic.realmName = "App Authentication"
	  // enforce SSL
	  secureChannel.definition = [
	     [pattern:'/api', access:'REQUIRES_SECURE_CHANNEL'] // strongly recommended
		 // your other secureChannel settings
	  ]
	  filterChain.chainMap = [
        // For Basic Auth Chain:
        // - Use statelessSecurityContextPersistenceFilter instead of securityContextPersistenceFilter,
        // - no exceptionTranslationFilter
        // - no anonymousAuthenticationFilter
        // As springsec-core does not support (+) on JOINED_FILTERS yet, we must state the whole chain when adding our basic auth filters. See springsec-core #437.
        [pattern:'/api/**', filters: 'securityRequestHolderFilter,channelProcessingFilter,statelessSecurityContextPersistenceFilter,logoutFilter,authenticationProcessingFilter,customBasicAuthenticationFilter,securityContextHolderAwareRequestFilter,basicExceptionTranslationFilter,filterInvocationInterceptor'], // Use BasicAuth
        [pattern:'/**',filters:'JOINED_FILTERS,-statelessSecurityContextPersistenceFilter,-basicAuthenticationFilter,-basicExceptionTranslationFilter'] // normal auth
	  ]
	  interceptUrlMap = [
		[pattern:'/api/**', access:['ROLE_API_EXAMPLE']],
		[pattern:'/**', access:['ROLE_USER']]
	  ]
	}
  }
}

5. UrlMappings definition

For the example above, you need to map your API controllers under /api/ in UrlMappings.groovy, e.g. as sketched below.
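A minimal sketch; the controller names are placeholders for your own API controllers:

    // grails-app/controllers/UrlMappings.groovy
    class UrlMappings {
        static mappings = {
            "/api/example"(controller: 'apiExample')                       // hypothetical ApiExampleController
            "/api/example/$id"(controller: 'apiExample', action: 'show')
            // web UI mappings stay outside /api/ and use the normal auth chain
        }
    }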





Thursday, October 6, 2016

Fortinet Route Based VPN with overlapping Networks

The other day I needed to establish an IPSEC VPN on a Fortinet FortiGate 60D with source NAT for an overlapping-subnet scenario: the remote subnet was the same as our local one.

I only found policy-based examples in the Fortinet KB, so I tested it myself using a route-based VPN.

The trick is to create an IP Pool with the source-NAT subnet range, e.g. 192.168.99.0/24.
This subnet is then presented to the remote IPSEC VPN peer as Proxy-ID during IPSEC phase 2 negotiation.

Whenever you access remote resources via the VPN, your local subnet IP (e.g. 192.168.1.2) is translated 1:1 into the IP-Pool subnet address (192.168.99.1) before entering the VPN.

1. Create an IP Pool (Policy & Objects > IP Pools > Create New) with the following settings:
  • Type: Overload
  • Range: 192.168.99.0 - 192.168.99.255
  • ARP Reply: checked
2. Create your route-based VPN (e.g. using the wizard). Type is "custom".
In Phase 2:

  • Use your IP Pool subnet address (the source-NAT range created in step 1)
  • Add all remote Subnets needed as Proxy-IDs. 
3. Add static routes for all remote subnets (Network > Static Routes):
  • Destination: Subnet
  • Subnet specification, e.g. 192.168.243.0/24
  • Device: <Tunnel Interface for the VPN>
  • Administrative Distance: 10
4. Create Address Entries for local and remote subnets. If you use the VPN wizard, these entries are created automatically. If you configure the VPN manually or on the CLI, you must create address book entries on your own:
  • Create one entry for your local internal network, e.g: 192.168.1.0/24
  • Create entries for all remote subnets
5. Create a policy (Policy & Objects > IPv4 Policy > Create New):
  • Incoming Interface: internal
  • Outgoing Interface: <Tunnel Interface for the VPN>
  • Source: <Your local internal network Address entry created in 4.>
  • Destination Address: <remote network address definition(s) created in 4.>
  • Schedule: always
  • Service: ALL
  • Action: ACCEPT
  • NAT: enable
  • Fixed Port: disable
  • IP Pool Configuration: "Use Dynamic IP Pool". Select your Source-NAT IP Pool defined in 1.
  • Enable this policy: enabled
6. Test your communication to the remote subnet(s).


Friday, April 10, 2015

Xcode 6.2 with iOS 8.3 devices (Swift 1.1 / 1.2 problem)

If you need to debug apps on an iOS 8.3 device, you must use Xcode 6.3.

If you are in the situation that you have this very important Swift 1.1 based application to show your customer now, and no time yet to migrate it to Swift 1.2, you must stick to Xcode 6.2. But that does not work out of the box: you receive a "Device not eligible" or a "platform directory not found" error.

To debug / deploy your Swift 1.1 application to an iOS 8.3 device with Xcode 6.2, there is a workaround.

1. Archive old Xcode 6.2

In Finder, go to /Applications and archive (zip) Xcode.app. This is an important step, as we need to unpack it again after the upgrade to Xcode 6.3.

2. Update Xcode to 6.3

Upgrade Xcode to 6.3 using the App Store application.

3. Rename Xcode 6.3

After the upgrade, rename Xcode.app to Xcode6.3.app.

4. Unpack Xcode 6.2

Now unpack the zip file created in step 1. Afterwards, you have 2 Xcode applications in /Applications: the old Xcode.app (6.2) and Xcode6.3.app.

5. Symlink the iOS 8.3 Device Support into Xcode 6.2

Open Terminal.app and enter:

  cd /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/ 

 ln -s /Applications/Xcode6.3.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/8.3\ \(12F69\)/ 

 sudo chown -R root:wheel /Applications/Xcode.app

This sym-links the iOS 8.3 platform support directory from Xcode 6.3 into Xcode 6.2.
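To verify the link ("8.3 (12F69)" should show up as a symlink pointing into Xcode6.3.app):

  ls -l /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/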

6. Start Xcode 6.2 and run your app on an iOS 8.3 device

Start /Applications/Xcode.app and try to run your application on an iOS 8.3 device. If you still receive the "Device not eligible" error, click Product > Destination > "Your iPhone" and try again.
It is possible that you need to issue new provisioning profiles the first time you run the app on iOS 8.3.

7. Select the command line tools

If you use Carthage, you may need to run xcode-select to select the Xcode 6.2 build tools, otherwise your Carthage dependencies fail to compile. Do not forget to switch back to 6.3 when needed.

#> xcode-select -p   # print the currently selected Xcode command line tools path
#> sudo xcode-select -s /Applications/Xcode.app/Contents/Developer



Note:
For sure, the best fix is to migrate your Swift 1.1 application to Swift 1.2 asap and work with Xcode 6.3.


Friday, October 24, 2014

Auto-connect OSX IPSEC VPN on system boot / wakeup

If you have OSX 10.10 (Yosemite) or higher installed and need to automatically (re-)connect a VPN connection on system boot or wakeup, read on.

For a headless remote OSX machine, I needed to set up an automatic VPN connection so the remote device is always accessible via VPN.


1. Create the LaunchDaemon plist file
sudo vi /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist 


content:

<?xml version="1.0" encoding="UTF-8"?>  
 <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">  
 <plist version="1.0">  
  <!--  
    See http://roosbertl.blogspot.com  
    Auto-connect to named OSX VPN when network is reachable.   
    This LaunchDaemon monitors the state of the given VPN configuration.  
    If the VPN is disconnected, it pings an internet host, first (www.google.com)  
    Then it establishes the VPN again.  
    Note: using scutil to connect, as "networksetup" does not work on Yosemite to reconnect a VPN  
    Based on plist by patrix   
    http://apple.stackexchange.com/questions/42610/getting-vpn-to-auto-reconnect-on-connection-drop  
    Config:  
      1. Replace "VPN (Cisco IPSec)" below with your VPN connection name as configured in system prefs  
      2. Install this file in /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist (sudo)   
      3. Set permissions  
       sudo chown root:wheel /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist   
       sudo chmod 644 /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist   
      4. activate/update with:  
      sudo launchctl unload -w /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist   
      sudo launchctl load -w /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist   
   -->  
  <dict>  
   <key>Label</key>  
   <string>org.roosbertl.osxvpnautoconnect</string>  
   <key>ProgramArguments</key>  
   <array>  
    <string>bash</string>  
    <string>-c</string>  
    <string>(test $(networksetup -showpppoestatus "VPN (Cisco IPSec)") = 'disconnected' &amp;&amp; echo "Re-Connecting VPN (Cisco IPSec).." &amp;&amp; ping -o www.google.com &amp;&amp; scutil --nc start "VPN (Cisco IPSec)") ; sleep 10</string>  
   </array>  
   <key>RunAtLoad</key>  
   <true/>  
   <key>KeepAlive</key>  
   <true/>  
  </dict>  
 </plist>  

2. Set permissions

sudo chown root:wheel /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist 
sudo chmod 644 /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist 


3. Activate

sudo launchctl load -w /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist 
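To check that the daemon is loaded and the VPN actually comes up (the connection name must match the one configured in the plist):

  sudo launchctl list | grep org.roosbertl.osxvpnautoconnect
  scutil --nc status "VPN (Cisco IPSec)"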


Thursday, March 6, 2014

Oracle Jaxb Maven Artifact mess...

Today I wanted to upgrade jaxb-xjc from 2.1.5 to 2.1.16 and got the error

Could not find group:com.sun.xml.bind, module:jaxb-core, version:2.1.16.

After digging around on mvnrepository.com, there was no jaxb-core 2.1.16 available.
I first suspected the usual Sun / Oracle "download our RI zip to get the artifacts" game, so I downloaded jaxb-ri-2_1_16.zip from https://jaxb.java.net/downloads/ri/ and unpacked it.

No jaxb-core.jar in the zip...

Then I found bug report https://java.net/jira/browse/JAXB-984

They messed up the POM files of all the newer Jaxb 2.1.x versions. The bug seems to be only partially resolved, as they closed it without fixing 2.1.16 (and some other versions).

That's a "reference implementation" I like a lot...




Saturday, October 19, 2013

Grails database-migration-plugin: DB independent diff files

If you are using the Grails database-migration plugin and your application has to support MySQL as well as Oracle, you currently have 2 choices. As the underlying Liquibase library is currently unable to create truly database-agnostic migration files when performing a diff, you can:

  • create 2 different sets of migration files, one for MySQL and one for Oracle. The drawback is that this is error-prone and anything but DRY.
  • convert the created migration files automatically so they are truly database-agnostic.
Thanks to the Grails database-migration plugin hooks (available since plugin version 1.3), we can do the latter automatically on application start, right after creating a new migration file. Migration files are only converted once; converted files are marked with a special comment to indicate the conversion.

In changelog.groovy, define all the types you want to use for Oracle and MySQL (you can easily extend this to support other DB types):

databaseChangeLog = {
  
  /*
    DATABASE SPECIFIC TYPE PROPERTIES
   */
  property name: "text.type", value: "varchar(50)", dbms: "mysql"
  property name: "text.type", value: "varchar2(500)", dbms: "oracle"

  property name: "string.type", value: "varchar", dbms: "mysql"
  property name: "string.type", value: "varchar2", dbms: "oracle"

  property name: "boolean.type", value: "bit", dbms: "mysql"
  property name: "boolean.type", value: "number(1,0)", dbms: "oracle"

  property name: "int.type", value: "bigint", dbms: "mysql"
  property name: "int.type", value: "number(19,0)", dbms: "oracle"

  property name: "clob.type", value: "longtext", dbms: "mysql"
  property name: "clob.type", value: "clob", dbms: "oracle"

  property name: "blob.type", value: "longblob", dbms: "mysql"
  property name: "blob.type", value: "blob", dbms: "oracle"

  /* DATABASE SPECIFIC FEATURES */
  property name: "autoIncrement", value: "true", dbms: "mysql"
  property name: "autoIncrement", value: "false", dbms: "oracle"


  /* Database specific prerequisite patches */
  changeSet(author: "changelog", id: "ORACLE-PRE-1", dbms: "oracle") {
    createSequence(sequenceName: "hibernate_sequence")
  }

  /* Patch files */  
  include file: 'initial.groovy'

}
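To illustrate what the conversion described below does: a changeset line generated by dbm-gorm-diff on MySQL (the column name is made up) is rewritten to the property-based form:

  // before conversion:
  column(name: "description", type: "varchar(50)")
  // after conversion:
  column(name: "description", type: '${text.type}')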

Then create a Callback Bean class for database-migration-plugin and register it in resources.groovy:

migrationCallbacks(DbmCallbacks)

Bean:

import groovy.util.logging.Slf4j
import liquibase.database.Database
import org.codehaus.groovy.grails.commons.GrailsApplication
import org.codehaus.groovy.grails.plugins.support.aware.GrailsApplicationAware

@Slf4j
class DbmCallbacks implements GrailsApplicationAware {
  GrailsApplication grailsApplication // injected via GrailsApplicationAware
  private static final String MIGRATION_KEY = "AUTO_REWORKED_MIGRATION_KEY"
  private static final String MIGRATION_HEADER = "/* ${MIGRATION_KEY} */"
  // DB-Specific types to liquibase properties mapping
  // see changelog.groovy for defined liquibase properties
  Map<String,String> liquibaseTypesMapping = [
          // start with specific ones, then unspecific ones.
          'type: "varchar(50)"': "type: '\\\${text.type}'",
          'type: "varchar2(500)"': "type: '\\\${text.type}'",
          'type: "varchar"': "type: '\\\${string.type}'",
          'type: "varchar2"': "type: '\\\${string.type}'",
          'type: "bit"': "type: '\\\${boolean.type}\'",
          'type: "number(1,0)"': "type: '\\\${boolean.type}'",
          'type: "bigint"': "type: '\\\${int.type}'",
          'type: "number(19,0)"': "type: '\\\${int.type}'",
          'type: "longtext"': "type: '\\\${clob.type}\'",
          'type: "clob"': "type: '\\\${clob.type}\'",
          'type: "longblob"': "type: '\\\${blob.type}\'",
          'type: "blob"': "type: '\\\${blob.type}\'",
          // regEx (e.g. "varchar(2)" to ${string.type}(2)'. Do not add trailing "'", here!
          '/.*(type: "varchar\\((.*)\\)").*/': "type: '\\\${string.type}",
          '/.*(type: "varchar2\\((.*)\\)").*/': "type: '\\\${string.type}",
          // db features
          'autoIncrement: "true"': "autoIncrement: '\\\${autoIncrement}'"
  ]

 void beforeStartMigration(Database database) {
   reworkMigrationFiles()
 }
 private void reworkMigrationFiles() {
    def config = grailsApplication.config.grails.plugin.databasemigration
    def changelogLocation = config.changelogLocation ?: 'grails-app/migrations'
    new File(changelogLocation)?.listFiles().each { File it ->
      List updateOnStartFileNames = config.updateOnStartFileNames
      if (updateOnStartFileNames?.contains(it.name)) {
        // do not convert updateOnStart files.
        return
      }
      convertMigrationFile(it)
    }
  }
 private void convertMigrationFile(File migrationFile) {
    def content = migrationFile.text
    if (content.contains(MIGRATION_KEY)) return
    liquibaseTypesMapping.each {
      String pattern = it.key
      String replace = it.value
      if (pattern.startsWith('/')) {
        // Handle regex pattern.
        def newContent = new StringBuilder()
        content.eachLine { String line ->
          def regEx = pattern[1..-2] // remove leading and trailing "/"
          def matcher = (line =~ regEx)
          if (matcher.matches() && matcher.groupCount() == 2) {
              String replaceFind = matcher[0][1] // this is the found string, e.g. 'type: "varchar(22)"'
              String replacement = "${replace}(${matcher[0][2]})\'"  // new string, e.g. "type: '${string.type}(22)' "
              line = line.replace(replaceFind, replacement)
          }
          newContent << "${line}\n"
        }
        content = newContent.toString()
      } else {
        // non-regEx, so replace all in one go.
        content = content.replaceAll(pattern, replace)
      }
    }
    // mark file as already migrated
    content = "${MIGRATION_HEADER} +"\n"+ content
    migrationFile.write(content, 'UTF-8')
    log.warn "*** Converted database migration file ${migrationFile.name} to be database independent"
  }
}


This can surely be optimized (e.g. use only regEx definitions in the map and handle the case where no matcher groups are found), but it does its job.

Tested with MySQL and Oracle 11.2.0 XE.


Building 64bit TrueCrypt for OSX

Currently, TrueCrypt binaries are only available for PPC and i386, without any hardware acceleration.
Also, the available binaries are currently under suspicion, as nobody knows whether they were compiled from the official source code or tampered with by someone (hick..).

A project tries to get funded to audit the TrueCrypt sources and binaries for hidden backdoors: http://istruecryptauditedyet.com. The German c't magazine tried to rebuild the Windows binaries from the source code and found some suspect differences while comparing the binaries. See here [english translation] [original article in german].

To ensure at least that you do not use tampered binaries, you can use the following script to build a 64bit OSX version with hardware-accelerated AES functions from the TrueCrypt sources yourself (idea and patches: see this blog post).


#!/bin/sh
# Build TrueCrypt on OSX with 64bit and HW acc. AES
# 2013 http://roosbertl.blogspot.com
####
version=7.1a
md5="102d9652681db11c813610882332ae48"
sourcename="TrueCrypt ${version} Source.tar.gz"
####
download_filename="TrueCrypt%20${version}%20Source.tar.gz"
which /opt/local/bin/port &>/dev/null
if [ $? != 0 ]; then
  echo "Port seems not to be installed."
  echo "Please install www.macports.org, first"
  exit 1
fi
currDir=`pwd`
workDir="$0.$$"
echo "Creating TrueCrypt $version"
mkdir $workDir
trap "echo cleaning up; cd $currDir; rm -rf $workDir ; exit" SIGHUP SIGINT SIGTERM
cd "$workDir"  # work inside the scratch dir so downloads and the build stay contained
echo "Getting required Ports.."
sudo port install wxWidgets-3.0 fuse4x nasm wget pkgconfig
sudo port select wxWidgets wxWidgets-3.0
echo " "
echo "Downloading $sourcename"
wget --quiet http://cyberside.planet.ee/truecrypt/$download_filename
echo "Checking md5.."
thisMd5=`openssl md5 < "$sourcename" | cut -d " " -f 2`
if [ ! "$md5" = "$thisMd5" ]; then
  echo "MD5 checksum $thisMd5 does not match expected MD5 checksum $md5"
  echo "Either the source file was modified or you tried to download a different version"
  echo "FATAL ERROR. Aborting."
  exit 1
else
  echo "Checksum is ok."
fi
echo "Extracting '$sourcename'"
tar zxf "$sourcename"
cd truecrypt-${version}-source
echo "Getting Patch file.."
wget --quiet http://www.nerdenmeister.org/truecrypt-osx.patch
mkdir Pkcs11
cd Pkcs11
echo "Getting pkcs11 headers.."
wget --quiet ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11.h
wget --quiet ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11f.h
wget --quiet ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11t.h
cd ..
echo "Patching TrueCrypt for 64bit and HW accellerated AES.."
patch -p0 < truecrypt-osx.patch
echo "Compiling..."
make -j4
echo "Compile done."
mv Main/TrueCrypt.app "$currDir"  # keep the finished app outside the scratch dir
echo "Cleanup.."
cd $currDir
rm -rf "$workDir"
echo "Done creating TrueCrypt.app Version: $version"
# end
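Usage (the script file name is up to you): save the script and run it from a scratch directory; the finished TrueCrypt.app is left in the directory you start it from:

    sh build-truecrypt.sh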





Wednesday, July 31, 2013

jMeter-Server on OSX

If you want to run a jmeter-server unattended on one or several OSX boxes, you can do the following:

1. Create /Library/LaunchAgents/org.apache.jmeter.server.plist


#> sudo vi /Library/LaunchAgents/org.apache.jmeter.server.plist


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>LimitLoadToSessionType</key>
<string>System</string>
<key>KeepAlive</key>
<true/>
<key>Label</key>
<string>org.apache.jmeter.server.plist</string>
<key>Program</key>
<string>/Applications/JMeter-2.9.app/Contents/Resources/bin/jmeter-server</string>
<key>WorkingDirectory</key>
<string>/var/log</string>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>

The Program path is the path to the jmeter-server script. In the example above, I bundled JMeter 2.9 with Jar Bundler into an OSX app and added all JMeter folders (bin, lib) to Contents/Resources, so I can start the jmeter-server from the bundled app on several remote OSX boxes.

2. Load the plist file in launchctl:



# sudo launchctl load /Library/LaunchAgents/org.apache.jmeter.server.plist

This should immediately start the jmeter-server with its working directory set to /var/log (so jmeter-server.log ends up in the system log directory).

3. Register remote jmeter-servers in jMeter

To register the jmeter-server instances in your local jMeter program, edit bin/jmeter.properties and set the property "remote_hosts", adding your remote jmeter-servers as comma-separated IP addresses. Example:

remote_hosts=127.0.0.1,192.168.17.12
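Once registered, you can drive the remote servers from the controller box, either via the GUI (Run > Remote Start) or headless from the command line; the test plan name and IP are placeholders:

    jmeter -n -t testplan.jmx -R 192.168.17.12   # run on the given remote host(s)
    jmeter -n -t testplan.jmx -r                 # run on all configured remote_hosts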