Thursday, November 27, 2014

Install "unsupported" Hewlett Packard printer on OSX Yosemite

I believe everyone has their own "favorite" (but old) model of printer that meets one need and has the features you like, but that the manufacturer tends to "forget", at least on the software-support side. That happened to me with the Hewlett-Packard M1522n MFP, which does everything I need (laser printing, duplex, flatbed and feeder document scanning). Still, since OSX Lion, scanning was removed as a "supported feature" from the Apple-provided HP drivers. That was fine, as HP's "native" scanner software worked just fine on all recent OSX releases (10.7, 10.8 and 10.9).

Still, on Yosemite the HP driver installer greets you with a strange message: "Unsupported operating system, supported versions are 10.6 and above"! They must be joking.

By the way, this reminds me of the Windows 9 thingie.

Luckily, there is a way to install the software: thanks to Justin's blog, one can easily trick the installer into believing it's about to install on 10.9 (that's what I did), which is "above" 10.6 (while 10.10.1 seemingly is not).

Word of caution: OSX can go haywire during this operation (Finder did for me). So do not reboot with the changed plist, and quit any running applications! What I did: prepared the installer launch (mounted the dmg), made a copy of the plist file, edited the plist by setting the version to "10.9", performed the install, and immediately moved the "original" back into place (restoring the actual version). During the install the installer restarted Finder, and it went nuts (some icons disappeared). Then I verified that HP Scan still works, and did a reboot (as the plist file change did trigger some strange behavior).
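The whole backup/edit/restore dance can be sketched roughly like this, assuming (as in Justin's trick) that the installer reads the OS version from SystemVersion.plist. On a real machine the file is /System/Library/CoreServices/SystemVersion.plist and needs sudo; here it is demonstrated on a throwaway copy so nothing breaks:

```shell
PLIST=/tmp/SystemVersion.plist

# a minimal stand-in for the real plist
cat > "$PLIST" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>ProductVersion</key>
    <string>10.10.1</string>
</dict>
</plist>
EOF

cp "$PLIST" "$PLIST.orig"                  # 1. keep a pristine copy
sed -i.bak 's|10\.10\.1|10.9|' "$PLIST"    # 2. lie about the version
grep '<string>10.9</string>' "$PLIST"      # installer would now see 10.9
# ... run the HP installer here ...
mv "$PLIST.orig" "$PLIST"                  # 3. restore immediately afterwards
grep '<string>10.10.1</string>' "$PLIST"
```

The point of step 1 is exactly the caution above: the original must go back the moment the installer finishes, before anything else (Finder, a reboot) reads the faked version.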

HP Scan app still works and handles the scanner just fine on Yosemite.

Friday, July 12, 2013

M2Eclipse and unsupported plugins

Recently I got a mail from a friend asking me to explain a very common pattern: his Apache Maven build uses a plugin that is not supported by M2E (the well-known "plugin execution not covered" problem). Moreover, his plugin was actually generating some sources, hence the M2E-imported project was "all red", unusable in the IDE.

After exchanging a few mails, I decided to publish the stuff here for future reference; it might help someone else too. In this example I'll use the hawtbuf Maven plugin, as it fits this scenario completely: it generates some sources that you code against.

On the M2E side, to make this work you need to install the "m2e connector for build-helper-maven-plugin". All the other changes are done in the project POMs, by adding profiles that are triggered only when running in the Eclipse IDE with M2E; hence, they will not stir anything in "normal" CLI-triggered builds (like CI jobs, or you building in a console).

Also, the example assumes your POM's plugins section already contains a configuration for the org.fusesource.hawtbuf:hawtbuf-protoc Maven plugin, and that it all works as expected when executed from the CLI (so sources are generated, the plugin is well configured and its execution is bound). If the project is in this state and works fine from the CLI, then all you need to make it work in M2E is a bit of an M2E-specific profile, here it is:
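The snippet itself is embedded as a gist in the original post; a sketch of what such a profile looks like (plugin versions, the hawtbuf goal name and the generated-sources path are illustrative and need adjusting to your build):

```xml
<profile>
  <id>m2e</id>
  <!-- activated automatically when the project is imported into Eclipse/M2E -->
  <activation>
    <property>
      <name>m2e.version</name>
    </property>
  </activation>
  <build>
    <pluginManagement>
      <plugins>
        <!-- "fake" plugin: consumed only by M2E, never executed by Maven -->
        <plugin>
          <groupId>org.eclipse.m2e</groupId>
          <artifactId>lifecycle-mapping</artifactId>
          <version>1.0.0</version>
          <configuration>
            <lifecycleMappingMetadata>
              <pluginExecutions>
                <pluginExecution>
                  <pluginExecutionFilter>
                    <groupId>org.fusesource.hawtbuf</groupId>
                    <artifactId>hawtbuf-protoc</artifactId>
                    <versionRange>[1.0,)</versionRange>
                    <goals>
                      <goal>compile</goal>
                    </goals>
                  </pluginExecutionFilter>
                  <action>
                    <execute/>
                  </action>
                </pluginExecution>
              </pluginExecutions>
            </lifecycleMappingMetadata>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
    <plugins>
      <!-- real plugin: adds the generated sources to the IDE build path -->
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <version>1.8</version>
        <executions>
          <execution>
            <id>add-source</id>
            <phase>generate-sources</phase>
            <goals>
              <goal>add-source</goal>
            </goals>
            <configuration>
              <sources>
                <source>${project.build.directory}/generated-sources/proto</source>
              </sources>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```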


A bit of explanation.

This profile activates itself only when the project runs within (is imported into) Eclipse/M2E; no other build, like those executed from the CLI or on CI, is affected (unless you explicitly enable the profile with -Pm2e, which I don't think you want). The profile is activated on the presence of the "m2e.version" system property, which is set by M2E.

Then, the pluginManagement section adds the lifecycle-mapping plugin and targets the hawtbuf plugin via its pluginExecutionFilter; this is where you customize things to fit your plugin's GAV. The <execute/> action tells M2E to execute the plugin, which is what we want, to have the generated sources available in the IDE. It should be noted that the org.eclipse.m2e:lifecycle-mapping plugin referred to in pluginManagement is actually a "fake" plugin: it does not exist, its sole purpose is to "sneak" the needed information into the POM and let M2E know where to source it from. Also, since it's in the pluginManagement section, Maven will not even touch it! Basically, this structure in the POM exists only so M2E can consume it.

Lastly, the build-helper plugin is used to make M2E add the generated-sources folder to the project build class path. This time we deal with a real plugin, but it will be executed only when running the project from the IDE.

Hope it helps, and have fun!

Friday, January 06, 2012

The flawed Snapshots

Recently I hear people screaming about how bad Maven snapshots are (whatever "bad" means). "We don't want snapshot dependencies in our build!", "Snapshots introduce instabilities", yada yada yada. True, if you don't understand how they work and, as a potential consequence, misuse them.

Before I start this discussion, I'd like to explain some terms I use, to avoid any misunderstanding. I assume Maven (the latest stable one, not some ancient version) is used for development. I also assume CI is in place, and that the CI jobs do the deploys to the MRM too (no manual deploys happen). Finally, as you guessed, I assume an MRM (Maven Repository Manager) is in place too, hosting your releases and CI-built snapshots (and doing some other handy things like proxying, but that's irrelevant for now).

Snapshots in general

In general, they are strange beasts indeed. If you imagine your company and the projects running within it, those could be visualized more or less as circles. Some of the projects might intersect (remember Venn diagrams from grade school?). Rarely, a project "circle" might span outside of the company too.

Usually, you want to avoid using snapshots when the "edge" (connecting your project in one circle with the snapshot artifact's project in another circle) crosses a circle boundary. They might introduce instability then.

Rule of thumb: if you are not governing the life of a snapshot, you should not touch it (with some exceptions; it's not all black or white, as we know). "Governing" as in "it's your project, you are changing it, hence your activity triggers CI to rebuild it". But "governing" might also be "you periodically sync the sources of a foreign project in SCM, possibly apply patches, and again implicitly trigger the CI builds by doing so". The latter is one of the best practices when you do need a snapshot (cutting edge) of a library over which you have no direct influence. It is the only possible way when you need a patched cutting edge too, or when your patches are "pending", not yet applied to the project sources (by some foreign entity, since we are talking about a foreign project). This works if you are allowed (able) to access the sources: the project in question allows it, is OSS, or allows it by some other reason or agreement. In this case you would usually NOT use the SCM change trigger on the job (it depends on the SCM; you could use the trigger if you maintain a forked Git repository, for example); you'd rather trigger it manually after syncing the latest changes, directly influencing when the snapshot is built and what changes it contains. You govern it.

In every other case, all you should do is avoid the snapshot and use a release.

Naturally, this is not always possible, so you are left with the "freeze the snapshot" solution: download the snapshot binary, rename it (making its Maven coordinates a non-snapshot, but use some distinguishing mark, like an SVN revision number, a part of a Git commit hash, or at least a date, for future reference) and upload it to your MRM. You can "freeze" another (newer) snapshot from time to time if needed (whenever needed).
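The "freeze" itself needs no special tooling; a sketch with the stock deploy plugin (the coordinates, repository id and URL here are made up, and the hash suffix in the version is the distinguishing mark mentioned above):

```shell
# take the downloaded snapshot jar and redeploy it under a non-snapshot,
# uniquely marked version (here: part of the Git commit hash it was built from)
mvn deploy:deploy-file \
  -Dfile=foo-lib-2.1-SNAPSHOT.jar \
  -DgroupId=org.example \
  -DartifactId=foo-lib \
  -Dversion=2.1-g1a2b3c4 \
  -Dpackaging=jar \
  -DrepositoryId=my-mrm \
  -Durl=https://mrm.example.com/content/repositories/frozen-snapshots/
```

The repositoryId must match a server entry (with credentials) in your settings.xml.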

Snapshots within an organization

Snapshots within an organization, consumed by (intersecting or not) projects of the same organization, might fall under the "in general" case very easily. Usually they should fall there, but it depends on many factors (you can't easily compare a company spanning the globe with a 4-person company doing two projects). Two independent projects should consume each other's artifacts only after they are released.

But the situation here is slightly better than the "in general" case, since the snapshot is governed not by some foreign entity but by our colleagues. And colleagues tend to collaborate. So you can easily ping your mate, if your snapshot contains some breaking change, about how to adapt the consuming project's code to it. Or, in reverse, you can nag them to fix something in there (and have the CI build and deploy it for you). Ultimately, the CI you visit every day builds those artifacts too; you still have insight into them.

Naturally, this works at "small scale" but not so much at "large scale"; think companies spanning multiple continents. "Common sense" is the best practice here: you should weigh the options and decide which approach works best for you.

Just to remark: "organization", in the way I use it here, does not map to real-world organizations like the ASF. Again, the modeling depends on the actual context. In the case of the ASF, the "organization" would better map to a "top level project". Also, an "organization" might map to a branch office only (in the case of a geographically distributed company), etc. In general, every "participant" in the message passing from project A to project B is at least one circle. Put another way: a circle boundary means "you can influence the remote end less and less".

Snapshots within a project

A project might contain multiple reactors, or a project (call it the "main project(s)") might have some subordinate smaller projects (offering utilities and such) managed by the same team. In this case, since both are governed by you, a similar approach should be taken as with SCM "feature branches" (or branches in general, both short-lived and long-lived): if you pick up a story that requires a modification in a subordinate project, in your branch you modify the POM of the consuming project, you apply the needed changes (bug fix, new feature) to both the consuming and the subordinate project, and finally you release the subordinate project. Meaning, the merge into the main project happens with a new release (or the release is done shortly after the merge, irrelevant). Meaning, the subordinate project "lives" as a snapshot dependency in the consuming project almost as long as the branch lives (in the case of short-lived branches). In the case of long-lived branches, you perform the release when you finish the story and its result is accepted by the main project. Ultimately, you can relax this to "a modified subordinate project lives as a snapshot dependency until the next release cycle", since the release plugin will force you to release it anyway. But the former fits the "release early, release often" mantra better, and is even better when there is more than one consuming project.
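The release step at the end of such a story can be done with the stock plugins; a sketch (the GAV in the includes filter is made up):

```shell
# in the subordinate project's checkout, on the finished branch
mvn release:prepare release:perform

# then, in the consuming project's branch, replace the -SNAPSHOT
# dependency with the just-released version
mvn versions:use-releases -Dincludes=org.example:subordinate-lib
```

The versions plugin edit still has to be reviewed and committed like any other POM change, but it removes the error-prone manual hunt for -SNAPSHOT entries.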

A bit of a digression here: versions and releases are cheap. With subordinate projects you can easily end up in situations where main project "1.0" uses "1.0" of a subordinate project, but "2.0" of the main uses "3.4" of the subordinate (for example, because between "1.0" and "2.0" you had multiple ongoing stories affecting the subordinate project). It does not stir any water. Nobody cares, believe me.

Not all snapshots are equal

One final word of warning: you, as a human, are able to deduce a lot from a snapshot version (and its "possible" or "expected" behavior), a lot more than Maven can. For Maven it's really just black (it's a snapshot, handle it as such) or white (it's a release). But for a human it is not. Some examples:

You see a snapshot whose version is a patch bump over an existing release. Simply verifying the existence of the preceding release might "suggest" to you that this snapshot will not contain groundbreaking or API-breaking changes; it is about a bug fix. It should not break your code unexpectedly (unless your functionality depends on the existence of the very bug being fixed!).

You see a snapshot versioned "2.0.0-SNAPSHOT" (or any zero-zero one). As you guessed, you usually do expect API changes and breaking changes here. So these are to be avoided as dependencies (multiplied by as many "circle" boundaries as they cross). As an interesting example, the Lucene project uses an interesting approach to versioning: as the project progresses, they release "1.1", "1.2", "1.2.1" etc., but as part of the v2 preparation they start publishing "1.9" and similar versions, which are kinds of "messengers" for the API and other changes upcoming in the not-yet-released "2.0". This eases adoption of the new API, while it does require more knowledge about the developers' intentions, so it requires more reading on the Lucene site to understand where "1.5" and the other "missing" versions went.

You see a snapshot versioned "1.10.0-SNAPSHOT". By checking for the existence of the preceding release, you assume it is okay. But it is not. This one might never be released. This is a well-known problem in the Maven world: the "latest" snapshot you find in a snapshot repository might never be released. Again, it is up to you to be involved with the project/entity producing the snapshot if you have to consume it (participate in meetings, subscribe to their mailing lists, forums, etc.).


Snapshots are not flawed, but they do need care. Sadly, the knowledge of the developers using them usually is.

Tuesday, December 06, 2011

There’s a time and a place for everything and it’s called college

I really, really despise L10N. Especially when it is used in the blunt way of "just translate whatever text you see on screen". In my opinion, that's just plain wrong. Just like South Park's Chef explains about drugs, there's a time and place for localization too.

Since my first computer (good old times: 1984, an Atari 800 XL), I have always used my gadgets (computers, dumb phones, smart phones, iPods, etc.) with the Language setting set to English (while setting the Regional Settings to some continental value, since I use the metric system). And this is why: I do not understand what the fucking machine wants to tell me in Hungarian. Nor in German. Or in whatever other language except English.

One typical example: my wife and I have exactly the same smartphones. As described above, mine uses English as the menu language, while she insisted on "Hungarian menus", so she got them. One day she realized that while using my phone she was able to write way longer text messages, while on her phone the messages were about a third as long (and, if longer, automatically cut into multiple messages by the phone)! So she asked me "to do the same I did with my phone, since it's annoying for her to squeeze her messages into so few characters" (she's not a Twitter user either). Sure, no problem! I started wandering around her phone's messaging menus, and one menu caught my eye: "Input mode" (naturally, localized into Hungarian). OK, enter here, and three options were given: "Automatic" (auto-discarded, I don't like gadgets making decisions instead of me), "GSM standard alphabet", and "Accented characters"... Hm, nothing suspicious... So I continued the search, but naturally failed. She was still only able to send "short" text messages. Then I had an idea and looked at my phone, same menu: "Input mode", options are "Automatic", "GSM Alphabet" and... "Unicode"! The precious translator had translated the name "Unicode" into "Accented characters"! Dumb ass. That explained everything. Setting her phone to use the "GSM Alphabet" solved her problem of short messages, but I bet examples like these are easily found in multiple places.
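The technical background of the "short messages": an SMS using the GSM 7-bit alphabet fits 160 characters per message, while one containing any character outside that alphabet is sent as Unicode (UCS-2) and fits only 70 -- roughly "a third", as observed. A toy calculation (the per-segment limits are from the SMS spec; the message length is made up):

```shell
len=200  # message length in characters

# GSM-7: 160 chars in one SMS, 153 per part when concatenated
gsm_segments=$(( len <= 160 ? 1 : (len + 152) / 153 ))
# UCS-2: 70 chars in one SMS, 67 per part when concatenated
ucs2_segments=$(( len <= 70 ? 1 : (len + 66) / 67 ))

echo "GSM-7: $gsm_segments segment(s), UCS-2: $ucs2_segments segment(s)"
# -> GSM-7: 2 segment(s), UCS-2: 3 segment(s)
```

So the "Accented characters" ("Unicode") input mode silently cut the usable message length by more than half.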

Another great example is Apple OSX. Naturally, her Mac uses the Hungarian localization (available since Lion). I was dismayed that not only Finder and the (localized) application menus are in Hungarian, but that Apple localized the application names too! Usually, when she asks me for help (usually I need to kill the Flash plugin), I start with what I do on my machine: Cmd + Space, start typing "activi"(ty monitor), Spotlight brings it up, press Enter, and start looking for the rogue process. But not on her machine... Spotlight does not report "Activity Monitor" as something that exists on my wife's Mac, while we both use the same OSX! Just really annoying. The application name "Activity Monitor" is localized too! This reminds me of the old Microsoft fiasco, when they "localized" Excel for Hungarian in a way that even the functions were localized, hence non-Hungarian and Hungarian spreadsheets were simply incompatible! Way too stupid. I mean, OK, localize Finder, but an OS tool???

So, just like Chef says: there's a time and place for localization too. I believe that when Mary (or Mariska) types her email, it's okay for her to use menus in English (or Hungarian, respectively). Same for typing in a word processor. But.

English (it could be Latin or Esperanto, I don't care) is a language well fit for these "one word commands", like "Save", "Quit", "Copy" and "Paste". Many times the forced one-word translations are hilarious, or else the almost "sentence-like" translations ruin the UI design. And regularly the meaning they carry differs, at least enough to make you wonder what the original label was. Natural languages are that way "by design": you will never be able to translate the perfect meaning, due to language constructs, cultural differences, a sloppy translator, or all of these. And you just ruin the application by doing it, and waste a lot of resources and money on it (waste, just like when companies sue each other instead of turning that money to R&D). Again, there's a time and place for doing it.

But if you make an application used by some narrow "set" of users, like a tool for developers or tools for IT technicians, I'd never bother localizing it. There is a "lingua franca" for them, and that's English. Just accept it as a fact.

In my opinion, Chef was right. But he was talking about drugs: "Look children: this is all I’m gonna say about drugs. Stay away from them. There’s a time and a place for everything and it’s called college." Well said!

Friday, May 20, 2011

Trick: gather outbound GETs made by Nexus for a $reason

The $reason might differ a lot; I was just curious how to do this in a "lab" environment, to get a list of the URLs fetched by Nexus (fetches that were actually made to fulfill client requests). Again, this is a test, not quite usable in production environments -- unless you spice it up, maybe.

All I wanted was a list of the URLs (artifacts) that my Nexus fetched during a test. I wanted to check that list, sort it, count the distinct URLs, check for dupes (if any), etc. This is here just as a future reference for me, and maybe it helps somebody else too.

How to do it:

  1. Set up a "clean" Nexus installation by, let's say, unzipping the bundle somewhere.

  2. Fire it up, log in as the "admin" user and set logging to DEBUG level over the UI -- Nexus will spit out outgoing HTTP GETs at DEBUG log level like this:
    jvm 1    | 2011-05-20 14:44:58 ... - Invoking HTTP GET method against remote location
  3. Start some client fetching against Nexus; I did this:
    [cstamas@marvin test]$ mvn -s settings-1.xml clean install > b1.txt & mvn -s settings-2.xml clean install > b2.txt & mvn -s settings-3.xml clean install > b3.txt &
    ... and went for a coffee.
  4. Process the logs.

Processing the logs

  1. Concatenate the logs into a single file -- if needed. I had to; I ended up with two log files, since the DEBUG level made the wrapper roll the file based on size, I guess.
  2. Filter the logs appropriately; I used a combination of tools like grep and awk to produce my list of URLs.

Example session:

$ cp ~/worx/sonatype/nexus/nexus/nexus-distributions/nexus-oss-webapp/target/ .
$ unzip
$ cd nexus-oss-webapp-1.9.2-SNAPSHOT/bin/jsw/macosx-universal-32/
$ ./nexus console
$ cd ../../../logs
$ less wrapper.log
$ less wrapper.log.1
$ cat wrapper.log.1 wrapper.log > remoteFetches.txt
$ less remoteFetches.txt
$ cat remoteFetches.txt | grep "Invoking HTTP GET method against remote location" > remoteFetches-filtered.txt
$ less remoteFetches-filtered.txt
$ awk 'BEGIN{FS=" "}{ printf "%s\n", $18}' remoteFetches-filtered.txt > remoteFetches-urls.txt
$ less remoteFetches-urls.txt


It gave me a list like this one (unsorted; the URLs are ordered as Nexus made the fetches):


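The sorting and dupe-counting mentioned at the beginning is then plain coreutils; a sketch with a synthetic three-line stand-in for remoteFetches-urls.txt:

```shell
# stand-in for remoteFetches-urls.txt (the real one comes from the awk step)
cat > /tmp/remoteFetches-urls.txt <<'EOF'
http://repo1.maven.org/maven2/junit/junit/4.8.2/junit-4.8.2.pom
http://repo1.maven.org/maven2/junit/junit/4.8.2/junit-4.8.2.jar
http://repo1.maven.org/maven2/junit/junit/4.8.2/junit-4.8.2.pom
EOF

# dupes bubble to the top with their fetch counts
sort /tmp/remoteFetches-urls.txt | uniq -c | sort -rn

# count of distinct URLs
sort -u /tmp/remoteFetches-urls.txt | wc -l
```

On the sample above, the first command shows the .pom fetched twice, and the distinct count is 2 -- with a real log, repeated fetches of the same artifact are exactly what you are hunting for.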
Have fun!

Friday, May 06, 2011


Dear visitors and commenters! Please excuse my negligence; I simply got no notification from the blog that I had comments, which were plainly marked as SPAM!

Sorry again; I will look to fix this issue and prevent valid comments from being declared spam (by the Blogger engine).

Tuesday, May 03, 2011

Home networking solved (ugg)

This is an interesting story that baffled me for two days, affecting even my work-hours presence; hence, I was eager to solve it.

My provider is the Hungarian T-Home (a DT subsidiary, just like T-Mobile and all the other "magenta" companies). Since January we have had IPTV set up, which meant a new cable modem too. The old "dumb" cable modem had to be replaced, and I wanted to migrate my home networking infrastructure without any disturbance. I did, and it worked. Up to yesterday, when something happened...

The switch

The day the serviceman appeared to install the units was interesting, so let's start with that. He brought just two boxes, one with the modem and one with the IPTV Set Top Box (STB). Let's forget the fact that he initially brought the wrong STB -- the one without an HDD, while we ordered the one with an HDD -- he installed them quickly. The interesting part came when he spotted my infra sitting next to the modem while he was swapping the old modem for the new one (which turned out to be not a modem only at all). He put on a blunt smile and told me: "You have to stop using your own router, it interferes with the STB. The new modem has router capabilities too." I asked "how" does it interfere? He just kept repeating "You have to uninstall your router" and smiling. I believe he had to tell me that due to some company policy (the contract has some stupid limit on the number of machines allowed to connect, but nowadays, when even microwave ovens have WiFi, those policies can kiss my... um). OK, "I will remove everything the moment you finish", I lied.

So, what he installed looked very promising. Both pieces of gear wear a "Cisco" sticker. The modem (and router, and AP, as it later turned out) is a "Cisco EPC3925 EuroDOCSIS 3.0 2-PORT Voice Gateway", model EPC3925. It features 4 LAN ports, 2 phone ports (I am not using those; SIP phones rule) and an N WiFi AP. The STB is a Cisco ISB6030MT.

Both are "high quality" Cisco gear, not some cheap shit. Yeah. I believed that for a few days, until I tried to google them. It's cheap shit with nice stickers on it. Cisco did acquire a few companies and blatantly rebranded their products (why are they ruining their own trademark?). I did not care about the TV as long as it works and does what we want (it does, even if it runs ancient WinCE!!!), but this was one more reason not to rely on this modem as a router. I wanted to use it as little as possible. So I decided to change the network segment for my home stuff. This is what I ended up with:


In short, the modem was set up on its IP and I did not want to fiddle with it too much, so I switched my home network to the 192.168.1.x segment. The modem, STB and WRT-H are directly wired (better to reduce multicast group latency), and WRT-H (H as in "home") routes to the 192.168.1.x segment, but also does DHCP and DNS for the home (and "fixes" the damn Apache Software Foundation SVN server to work with git-svn, but that's another story) and QoS. Wired connections from it go to an Apple Time Capsule (TC) and the Gigaset SIP phone's base station. And it serves as the WiFi AP for home machines like Macs, phones and such. Both WRTs are actually good old Linksys WRT54GLs running the best custom firmware I had the chance to find, the Tomato firmware. And the WDS is here just to "hop" over to my office, and to be able to use the printer (actually an MFP) from home.

Not wanting to fiddle with the modem, all I did (that changes the "factory" preset config as T-Home ships it) was shut down its WiFi. Yes, T-Home ships these with WiFi on, and my neighborhood is full of WiFi noise with meaningless SSIDs (they are randomly generated), and many of my neighbors are simply unaware they have WiFi! Why oh why is T-Home shipping them like this? Why not turn WiFi on on the spot, if the customer asks for it in the first place?

And everything was working like a charm. Until yesterday.

The drops

The network, since the change to IPTV and the new modem, was fairly stable and fast. I did notice some small "drops" (like a browser trying too long to get a page), but they were intermittent and rare, so I did not fiddle with them.

Yesterday it started falling apart. My wife was unable to browse anything; my browser, git and svn were timing out (not "connection refused", but as if the TCP packets went to /dev/null somehow)... It was a nightmare. And the most interesting thing is that UDP was working without a problem! Initially I thought it was a recurring network outage (or brownout) on the provider side, but it was suspicious that Skype, for example, worked without interruption (same for the TV reception, which uses UDP multicast). So I phoned my provider, asking about an outage and describing the problem, but after a long session (they did some remote measurements and other checks) they convinced me the problem was on my side: they had good signal quality readouts and no packet loss reported (I did confirm the signal quality, since the modem prints those out on its ugly UI). To convince myself even more, I hooked up a Mac directly over the wire the STB was using, to try the network (to rule out WiFi, any in-the-middle router, etc.). It was working like a charm. So, really, it must be my equipment.

Tracing the problem clearly showed that TCP packets were somehow disappearing in my network, and WRT-H became the target of suspicion. But it reported no problem, and to make things worse, the "outage" was simply sporadic: one moment the network was working just fine (TCP at least, since the UDP services had no outage at all), and the next moment it stopped and packets were lost. The routing table looked okay there, but still, I wanted to check.

Macs (actually all BSD kernels, I believe) have a nice monitoring tool, route -n monitor, and it clearly showed that packets were being lost:

got message of size 124 on Tue May  3 11:31:17 2011
RTM_LOSING: Kernel Suspects Partitioning: len 124, pid: 0, seq 0, errno 0, ifscope 0, flags:<UP,GATEWAY,HOST,DONE,WASCLONED,IFSCOPE>
locks:  inits: sockaddrs: <DST,GATEWAY>
got message of size 124 on Tue May  3 11:31:32 2011
RTM_LOSING: Kernel Suspects Partitioning: len 124, pid: 0, seq 0, errno 0, ifscope 0, flags:<UP,GATEWAY,HOST,DONE,WASCLONED,IFSCOPE>
locks:  inits: sockaddrs: <DST,GATEWAY>

The gateway was WRT-H's IP address, meaning the TCP packet did leave the Mac but was lost. There were a LOT of these messages while the problem was present, but the next moment they stopped and the network worked. I was freaking out. I disassembled my network to its bits to rule out the WDS, one router, the other router; I shut down the Time Capsule, but nothing reliable came out of it. Btw, try to google these kernel messages above: NOTHING, but nothing really, can you discover about them.

So I googled the Hungarian hacker community, knowing I am not alone in having this piece of crap equipment. And what luck, I did find the answer here. Many thanks to the Hungarian Unix Portal and the people participating in that forum! The guy starting the thread had exactly the same symptoms I had, but using different HW and OS: he used Ubuntu (I had started suspecting Apple's OSX and who knows what; actually, I was clueless).

The limit

In short, it turned out that the crappy wannabe-Cisco modem has a conntrack connection limit set to 1024! But there is no admin UI where you can find this out, or at least read the value! When the connection count goes over that threshold, it starts dropping connections! This applies to TCP (stateful) connections, hence UDP is unaffected. It turns out -- luckily, the guy on the forum experimented this out with his modem -- that the modem's "SPI Firewall" is doing this, limiting the connection count to 1024 when turned on. And guess what the modem's default is! I did not apply the other fixes he proposed (again, I am not using the modem's AP), but shutting down the modem's firewall did make it work! Again, many thanks, HUP user "ufoka"!
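Since the modem offers no readout of its connection table, the best you can do is count from the client side; a rough sketch (Linux procfs; this sums only one host's sockets, while the modem's 1024 limit of course applies to everything behind it):

```shell
# count of TCP sockets (any state) this host currently holds;
# the header line of each procfs table is subtracted, and
# /proc/net/tcp6 may be absent when IPv6 is off, hence the guard
count() {
    if [ -f "$1" ]; then
        echo $(( $(wc -l < "$1") - 1 ))
    else
        echo 0
    fi
}

total=$(( $(count /proc/net/tcp) + $(count /proc/net/tcp6) ))
echo "this host holds $total TCP sockets"
```

Run it on every machine in the house during a "drop" and you get a feel for how close the household is to the ceiling.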

Later I figured out what had happened. At home we have two laptops and two smartphones going out (to the internet, making connections on the modem); the printer, for example, is just a "local" connection. But the phones, while they did have WiFi set up for home networking, were mostly left on 3G to conserve battery. When I bumped my phone's firmware to the latest Froyo, I started using it with WiFi constantly on (since the battery consumption turned out to be very good and durable). Over the weekend my wife's phone was updated too, and her WiFi got turned on as well. And it seems we were already near the 1k connections, and this just pushed us closer.

Simply, the blunt modem, when the threshold was hit, started silently dropping TCP connections, since it detected them as a "flood" or whatnot; that is why we hit it. Enabling WiFi on the phones just made things worse. And this explains the "sporadic nature" of the problem too: the phones sync now and then; when my wife pressed Enter in the browser she actually created a connection "burst"; same for me; etc. Blah.

Long story short: It's solved!