Get interface list and MAC addresses on Solaris

For CA Nimsoft Monitor, I had to implement retrieval of the list of Ethernet interfaces and their corresponding MAC addresses on Solaris. The basic process is:

  • Create a socket to use for the ioctl() calls
  • Use the SIOCGIFCONF ioctl() to get the list of interfaces
  • For each interface, read the name from the SIOCGIFCONF result and fetch the IP address with the SIOCGIFADDR ioctl()
  • For each interface, use the SIOCGARP ioctl() to look up the MAC address

Here’s the result:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <net/if_arp.h>
#include <netinet/in.h>
#include <arpa/inet.h>

void get_interfaces() {
    char buf[8192] = {0};
    struct ifconf ifc = {0};
    struct ifreq *ifr = NULL;
    int sck = -1;
    int nInterfaces = 0;
    int i;
    char ip[INET6_ADDRSTRLEN] = {0};
    char macp[19];
    struct ifreq *item;
    struct sockaddr *addr;
    struct arpreq arpreq;

    /* Get a socket handle. */
    sck = socket(AF_INET, SOCK_DGRAM, 0);
    if (sck >= 0) {
        /* Query available interfaces. */
        ifc.ifc_len = sizeof(buf);
        ifc.ifc_buf = buf;

        if (ioctl(sck, SIOCGIFCONF, &ifc) == 0) {
            /* Iterate through the list of interfaces. */
            ifr = ifc.ifc_req;
            nInterfaces = ifc.ifc_len / sizeof(struct ifreq);

            for (i = 0; i < nInterfaces; i++) {
                item = &ifr[i];
                addr = &(item->ifr_addr);

                /* The interface name: .... = strdup(item->ifr_name); */

                /* Get the IP address. */
                if (ioctl(sck, SIOCGIFADDR, item) >= 0) {
                    if (inet_ntop(AF_INET, &(((struct sockaddr_in *)addr)->sin_addr),
                                  ip, sizeof ip) == NULL)
                        continue;

                    /* The IP address: .... = strdup(ip); */

                    /* Get the MAC address from the ARP table. */
                    memset(&arpreq, 0, sizeof(arpreq));
                    memcpy(&arpreq.arp_pa, addr, sizeof(struct sockaddr));
                    if (ioctl(sck, SIOCGARP, (char *)&arpreq) == 0) {
                        snprintf(macp, sizeof(macp), "%02x:%02x:%02x:%02x:%02x:%02x",
                                 (unsigned char)arpreq.arp_ha.sa_data[0],
                                 (unsigned char)arpreq.arp_ha.sa_data[1],
                                 (unsigned char)arpreq.arp_ha.sa_data[2],
                                 (unsigned char)arpreq.arp_ha.sa_data[3],
                                 (unsigned char)arpreq.arp_ha.sa_data[4],
                                 (unsigned char)arpreq.arp_ha.sa_data[5]);

                        /* The formatted MAC address: .... = strdup(macp); */
                    }
                }
            }
        }

        close(sck);
    }
}

Uncategorized

Comments (0)

Permalink

git global ignores

Set up a global ignores file for all git repos:

git config --global core.excludesfile ~/.gitignore-global

I use .gitignore-global instead of .gitignore since my home dir is already a git repo with its own .gitignore.
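The whole setup can be done in a couple of lines; the patterns below are just examples of things commonly ignored everywhere, not a recommended list:

```shell
# Tell git to consult a global excludes file, and seed it with a
# few sample patterns (editor backups, vim swap files, macOS cruft):
git config --global core.excludesfile ~/.gitignore-global
printf '*~\n*.swp\n.DS_Store\n' >> ~/.gitignore-global

# Verify the setting took effect:
git config --global --get core.excludesfile
```

Patterns in this file apply on top of each repo's own .gitignore, so it's the right place for editor and OS noise you never want committed anywhere.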

Thanks to Programblings.


Disable Aero Peek when using x2vnc

I’ve been using VNC and x2vnc to drive my Windows 7 desktop from my linux workstation, so only one keyboard and mouse is needed. It works great except for one major annoyance: whenever the mouse was not on the Win7 desktop, Windows would hide all active windows, showing only the desktop background.

Thanks to this post by Mario Vilas, I realized this is due to the ‘Aero Peek’ feature. His approach was to modify x2vnc to provide a work-around. My approach was just to disable Aero Peek: right-click the taskbar -> Properties -> uncheck “Preview desktop with Aero Peek” on the Taskbar tab.

BTW – I had tried using synergy instead of VNC/x2vnc, but it really didn’t work as well. This was the only problem with the x2vnc solution, and now it’s solved.


GNU/Linux – dispersed development yields a complex functional aggregate

Pedro Côrte-Real analyzed the makeup of the source tree for the latest Ubuntu distribution, 11.04 ‘Natty Narwhal’:

[Chart: Total LOC split by project in Ubuntu Natty's main repository]

From his analysis:

It seems that when it comes to modern Linux-based distributions the tendency has been for the distribution to be the organization point of a highly dispersed set of software sources. No single project accounts for more than 10% of the total and a complete modern system is only formed by this aggregation.

It’s at least very cool, if not amazing, how the dispersed efforts of many developers have led to this aggregate system. A modern technological marvel.


Updated home workstation to Ubuntu 11.04 Natty Narwhal

I updated my home workstation / server to Ubuntu 11.04 ‘Natty Narwhal’. It wasn’t the worst upgrade ever, but did take me several hours beyond what I had planned. I’m pleased with the end result, however. I never even bothered with Unity, though. Sticking with the Gnome 2 classic desktop.

The main problem I ran into was the X11 configuration for my two-monitor setup. It’s based on a GeForce 7600 GS card, configured with TwinView for a single 3840×1080 desktop. After the upgrade, the desktop was seriously hosed – I’m not sure I can even describe it. The left monitor was blank; the right monitor had the gnome-panel at the top but was completely unresponsive. I could move the mouse seamlessly across both screens, but clicks didn’t register, and the apps that were running were nowhere to be seen. After multiple reboots, I finally figured out that if I moved the mouse to the left screen (which displayed nothing), it registered clicks on what was displayed on the right screen, offset by half a monitor width. Very strange, and obviously unusable.

I’m not sure it was entirely related, but there were also some issues with the (new?) ‘nouveau’ driver for my nvidia card conflicting with the proprietary nvidia drivers I had previously installed. The X server was running, but not correctly, so it was really not obvious what was going on. In the end, I had to uninstall all the drivers, start over from scratch with a fresh generic X11 config, and tune it up from there. I’m back to using the nvidia proprietary drivers; that seems to be working reliably, so I’m not gonna mess with it.

A few other niggling problems: Pidgin randomly gets into a state where it takes 100% CPU until I kill it. TweetDeck couldn’t load my profile data until I removed some directory (didn’t bother to find out exactly what it was about): rm -rf ~/.appdata/Adobe/AIR/ELS.

On the whole it seems to be a bit snappier, but maybe that’s just because I didn’t have TweetDeck (total resource pig on Linux) running for a while. I’m tempted next time around (11.10?) to do a fresh install instead of an upgrade. Perhaps by then I’ll have some new hardware…


The Cult of Done Manifesto


  1. There are three states of being. Not knowing, action and completion.
  2. Accept that everything is a draft. It helps to get it done.
  3. There is no editing stage.
  4. Pretending you know what you’re doing is almost the same as knowing what you are doing, so just accept that you know what you’re doing even if you don’t and do it.
  5. Banish procrastination. If you wait more than a week to get an idea done, abandon it.
  6. The point of being done is not to finish but to get other things done.
  7. Once you’re done you can throw it away.
  8. Laugh at perfection. It’s boring and keeps you from being done.
  9. People without dirty hands are wrong. Doing something makes you right.
  10. Failure counts as done. So do mistakes.
  11. Destruction is a variant of done.
  12. If you have an idea and publish it on the internet, that counts as a ghost of done.
  13. Done is the engine of more.


SpiderOak online backup – not what I hoped it would be

I’ve been looking for an online backup service for my home network (mix of linux and windows), and I thought SpiderOak was going to be the ticket.  I tried it out for a few days but I’m disappointed by the experience.

I was pleased with the pricing ($10/mo for up to 100GB) and multi-platform support.  There are other features, like multi-machine syncing and ‘ShareRoom’ public sharing, that seem useful.  But the product has some glaring holes that make it unusable for me.

On windows, the client has to be running continuously for backups to occur, and it is both a memory and CPU hog — not something I want to leave running all the time.  I was hoping for automatic setup as a service or something, or at least a very small-footprint process that runs continuously.

On linux, it’s the same story.  It takes a lot of resources for what it does, plus it has to run continuously.  It does provide a batch option (--batchmode) so you can call it via cron (same on windows).  It’s not terribly surprising that I have to configure that, and it’s covered in the FAQs.
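For the record, driving the batch mode from cron instead of the resident client looks roughly like this; the binary path, schedule, and log file here are my own assumptions, not from SpiderOak's docs:

```shell
# Hypothetical crontab entry: run SpiderOak's batch mode nightly at
# 02:30 rather than leaving the full client running all the time.
30 2 * * *  /usr/bin/SpiderOak --batchmode >> /var/log/spideroak-batch.log 2>&1
```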

I was also disappointed that it automatically traversed mounted remote filesystems, including the sshfs mounts I maintain to my hosting provider and work.  It was easy to de-select those from backup once I realized it (as long as I used the Advanced selection view), but it surprised me to find them in the backup set.

But the real issue that makes it a non-starter is security.  The files are all encrypted in transit and in storage in the cloud, so that’s not the concern.  The problem is that any files I backup on one machine are visible without a password (beyond the SpiderOak account password) on any other machine that uses the same account.  So there’s no way I’ll use SpiderOak for my linux server system files and my personal stuff and have the client running on the kids’ computers as well.

I expect those problems will be solved at some point, and I’d be happy to try again.  If I’m just missing something, I’d like to know that, too.  But for now, I’ll continue to look and hope for an online backup service that works for heterogeneous home networks… And using dirvish to backup to external drives.


buildbot adventures

I’ve been working on setting up buildbot to run a simple continuous-integration process for at least some of the development work at IntelePeer.  The main goal is to get fast, automated feedback when someone pushes a commit to the git repo that breaks the build or tests.

The first step is to get buildbot to be able to build from scratch.  The initial setup wasn’t hard, but getting the build recipe right took some doing, mostly due to funky daemon account setup and associated permissions.  I finally had a successful build on attempt #16…

Next step is to get the post-receive hooks set up in the master repo to tell buildbot when changes come in.  Looks like that shouldn’t be too hard, though we do already have another post-receive hook, so I’ll be using something like this to allow multiple hooks.
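The usual trick for running multiple post-receive hooks is a thin wrapper installed as hooks/post-receive that reads git's payload once and replays it to every script in a hooks directory. A sketch (the function and directory names are mine, not from any library):

```shell
# run_hooks reads the post-receive payload ("oldrev newrev refname"
# lines) from stdin and pipes a copy of it to every executable file
# in the directory given as $1.
run_hooks() {
    hookdir=$1
    payload=$(cat)
    for hook in "$hookdir"/*; do
        [ -x "$hook" ] || continue
        printf '%s\n' "$payload" | "$hook"
    done
}
```

Installed as hooks/post-receive, the wrapper body would just be `run_hooks "$(dirname "$0")/post-receive.d"`, with the existing hook and the buildbot notifier both dropped into that directory.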

Then the real issue: getting a useful, reliable test suite.  So far that’s not a big part of the developer philosophy at the company, but I’ve been making some steps to work that in.  There is at least a minimal suite for the parts I’ve written so far, and testing that is better than nothing.  There’s still a long way to go, and it’s a lot harder to build a test suite after development is done versus in phase with the main code development.


Back on the rails

I recently started a new personal web project, using ruby on rails.  I’ve been doing a lot of python over the past few months in my work at IntelePeer, and this project reminds me how much I prefer ruby over python.  It all just seems cleaner and more consistent as an overall language.  I feel reasonably proficient at python now, but ruby feels like a higher-quality tool to me.  While ramping up on python, I mentally noted 5-10 areas where I missed the ruby way; I should finish that list and write it up some day.

In the meantime, I have to refresh my brain a bit on rails.  I’m still using 2.3, since that’s what I was familiar with when I left off doing rails stuff about a year ago.  Lots of new stuff to learn with rails 3.  Some day.


Go language

Just watched “Another Go at Language Design” by Rob Pike from Google.  I had not heard of the Go language prior to this, and now I’m very intrigued.  The main points that I find interesting:

  • It’s a compiled language for speed, yet its flexible interface mechanism supports a (type-safe) duck-typing style I really miss when doing C/C++/Java
  • Goroutines + channels provide concurrency in the language
  • It’s designed to be good at system-level programming tasks

Admittedly, I’m not enamored with the syntax.  I’ve grown very fond of the aesthetics of Ruby, and Go seems like a big step backward on first impression.

I’ll add that to the stack of things to learn more about.  It may be a good fit for upcoming projects…

BTW — I highly recommend the EE380 Computer Systems Colloquium, a weekly lecture series held at Stanford during the academic year.  It’s been a while since I viewed any of the presentations, and this reminded me of the interesting topics they cover.  I hope to be a regular viewer again.
