Could a Bug be Deliberately Coded into an Open Source Project for Financial Gain?

For some bizarre reason, the thought at the top of my head last night at bedtime was… “I wonder if sometimes… open source developers deliberately code bugs or withhold fixes for financial gain?”

If you don’t follow what I mean, here’s where I was: oftentimes, large corporations or benefactors will offer a bug-fix bounty or development funding for an open source project they have come to rely upon.  What if an open source developer were to deliberately code a bug into an open source project, or withhold a fix, so they might extract some financial support this way?

I brought it up in #morphix to Gandalfar, one of my trusted open source advisors.  We debated it briefly and he brought up several good points.  While this may happen, the scheme is likely to fall apart quickly.  The community is the resolver of situations like this.  If the community finds the bug and offers a fix, the developer will find themselves in a political combat situation.  They would likely try to stifle the fix with ridiculous excuses and/or start to censor discussion of the subject on mailing lists or forums.  Speculation would mount and ultimately, people could start to fork the project elsewhere, unless the project’s license disallows that.  In the long run, the community would resolve the situation by simply offering a new solution.

So while it could theoretically be achieved for short-term gain, in the long run the community makes the approach unsustainable.

Why do I bring this up?  Well, I think we all know that closed source entities often engage in this practice.  I could point out several examples where I have absolute knowledge of this happening, but I don’t think I have to.  I’m not completely absolving open source of this either – look at what “official distributions” do in some situations… Red Hat Enterprise Linux or Novell (SUSE) for example.  But in those situations, if you didn’t want to pay to upgrade the operating system and still wanted to resolve your situation, we all know that with the right application of effort and skill you could overcome it.

All in all, this whole thought process ends up with a positive note about open source.  If it’s broken, you can fix it yourself or work with others to make it happen.  The community – that incredibly large, global groupthink – keeps it all honest.

Or, you can put all your money and eggs into a closed source basket and find out you’re getting screwed when it’s too late.

It’s all about choice, right?


Industry buzzword that needs to die: the “experience”


One of the industry buzzwords that needs to go to the grave is the user “experience.”

Don’t quote me here, but I recall this buzzword being developed by Microsoft as part of the marketing campaign behind Windows XP.  XP was supposed to be “experience” or “expert” or “Xtra stuPid marketing,” I’m not sure.  Don’t get me wrong, I’m not an XP hater.  But I’m definitely a hater of the term “experience.”

The user interface is just that, an interface.  The term “experience” has its roots in the passive voice.  Somewhere in there you’ll find that the user “experience” for computing generally sucks.  It’s just a nicer way of saying “our user interface sucks, but it provides an experience.”

Whatever.  Drop it.  Look, other computing companies are just as guilty on this one (I’m looking at you RIM and Apple), but the fact is – everything great on the Internet is ruined by salespeople and marketers, period.

This is just one example of many.


Scaling Guidance


At WWDC 2009, I stood up in a session on Snow Leopard Server and lightly rattled Apple’s cage about its poor scaling guidance for the product. They spent a great deal of time talking about the benefits of Wiki Server 2, but there was little to take away from the session about what to tell prospective customers regarding cost.

I think in the Apple world, there’s this unspoken rule that “whatever’s good enough” will suffice for an environment. That’s fine, but what I’d really like to be able to do is recommend a solution that will do the job.

In the past, Microsoft was very good about providing specific scaling guidance. Around the Dark Ages (which I define as the day SQL Server 2005 came out, and every product since), Microsoft pulled back on several things. Most notably, it stopped providing specific scaling guidance and clear documentation for its products.

The documentation that shipped with each product varied and was mostly vague and notional. Once, my Microsoft TAM asked me as part of some kind of survey whether blog content from Microsoft developers and the like could be considered official documentation. It seems some folks within the organization believe this to be so, but Goddess forbid you end up in the India support organization… where the only official documentation is the script in front of them.

Back to Apple. They provide fairly extensive documentation, and it’s usually quite good. Sometimes it’s wrong, but it’s still documentation. It’s something to point to and say, “See here, it says this should function this way.” That’s a nice level of comfort. What they do NOT provide is scaling guidance. They will never, ever tell you, “It will take x Snow Leopard servers to run Wiki Server 2 for 10,000 users pushing y amount of data.” Why not? I suspect it’s because they don’t know.

As Apple becomes more and more relevant in the enterprise, this has to change. If I’m going to propose an Apple-based solution to any of my prospective customers, I have to have something to work from. It cannot be “we’ll just throw some servers in there, watch it, then buy more.” Most customers… at least the government for sure… do not like that approach.

Ironically, I started this post as a complaint against Microsoft for providing not just vague information on this… but a box full of vague smoked up with an opaque fog on the glass.

That sucks, guys. Stop contributing to the failures of the IT industry and fix it.

I’m looking at you too, Apple.


Wallpaper Outrage


Paul Thurrott posted a nice attaboy to the MSN folks today for releasing a wallpaper product that will check with Microsoft for updates to your operating system.

Get ready folks, I’m about to show my ass again.

Are you KIDDING ME?  Paul Thurrott has obviously never had to manage a network beyond his own house.  Microsoft commonly releases updates through Windows Update, and if you’re a Windows admin worth your salt, you know it’s wise to hold off on many of these updates until you’re sure they’re not going to fry your systems.  Indeed, many enterprises flat out block Windows Update and only deploy updates when they’re ready to support any mishaps.
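
For anyone wondering what “flat out block Windows Update” actually looks like, here’s a rough sketch using the standard Windows Update policy registry values.  Normally you’d push these through Group Policy rather than a script, and the WSUS server name below is made up:

# Point clients at an internal WSUS server and keep them from installing
# updates on their own. (wsus.example.local is a placeholder.)
$wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
$au = "$wu\AU"
New-Item -Path $wu -Force | Out-Null
New-Item -Path $au -Force | Out-Null

# Internal update server instead of Microsoft's.
Set-ItemProperty -Path $wu -Name 'WUServer'       -Value 'http://wsus.example.local'
Set-ItemProperty -Path $wu -Name 'WUStatusServer' -Value 'http://wsus.example.local'

# Honor WUServer, and download but wait for an admin to approve installs.
Set-ItemProperty -Path $au -Name 'UseWUServer' -Value 1 -Type DWord
Set-ItemProperty -Path $au -Name 'AUOptions'   -Value 3 -Type DWord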

Anyone who thinks Microsoft cannot commit mishaps in an operating system update is just a fracking idiot.  Period.  Don’t even bother talking to me.

So now these MSN goons have released software that lets you bypass your enterprise security measures on Windows Update?  THANKS ASSHOLES.  If you’re running an enterprise network, please take a look at this software package that your users are RUSHING OUT THERE TO DOWNLOAD before they gank up your network.

This is basically WALLPAPER that can update your OS?  Great.  Another background app.  Another systray app.  Another useless waste of time and resources from a company that should be spending its time fixing Exchange 2007 instead of releasing useless garbage that grants enterprise users free license to bypass their IT department.

THANKS AGAIN, MICROSOFT!

You have so jumped the shark.


What I Want from Data Center Management Software

(note: the following is a stream of consciousness post regarding some software requirements as i dream them up.  if you are a developer and actually take up these requirements as the design for a software project, please let me know.  if you are aware of a software product that accomplishes all of this, please do not bother to let me know about it.  i don’t care.  fact is, nothing on the market today does this well enough to make me care about it the way i want to care about it.)

Let’s face it.  Documentation sucks.

I’ve traveled around this country and seen many an IT environment.  All of them have one thing in common: the documentation about the environment sucks.  It’s in such a sad state that should anything happen to anything, nothing would be recoverable.

We’re guilty of it in our own environment.  I’m not going to sit here and disparage everyone else’s IT environment without realizing that it’s a problem where I work too.  I’ve spent a lot of time wondering why this documentation is in such a sad state and come to a few conclusions.  I suspect these conclusions aren’t a surprise to anyone.

  • The staff is overworked.  They have no time to sit in your meetings, listen to the managers and customers rant and rave about how nothing works right (funny how that not-listening thing travels both ways), or get all of their assigned work done to begin with.
  • Documentation is boring.  There is nothing glamorous about writing a Word document about how you configured a paging file.
  • After writing the documentation, maintaining it is a real bear, especially in an age when the corporation that owns 90% of your data center farts a new patch daily.  What?  Tuesdays?  Oh man, that’s just for OS patches.  Try running some enterprise software sometime.  (NOTE TO SELF: Bitch about Exchange 2007 more, because that obviously hasn’t sunk in yet).
  • Too many fires to put out.  Remember that not-listening-is-a-two-way-street crack?  Yeah well, since management didn’t listen about your needs, you’re working 70 hours this week to fix all the crap that broke.  Oh yeah, don’t forget to document what you did to fix it.  (Now it’s 90 hours).

I could go on, but I think you get the idea.

So, now that I’ve listed reasons why you do not have the documentation, let’s talk about what happens when you do not sit near the data center and have questions about what’s what out there.

  • Need to find out what port a server is hooked up to?  Scan through your endless amounts of PSTs on the file share (haha!) to discover what port was assigned two years ago.  Fail.  Look for the document.  Oh!  Wait.  Fail that too.  No docs.  Ask someone who is sitting in the data center.  How the hell should they know?  They’re busy and don’t have time to help you.  Oh, by the way, that cable isn’t labeled anyway.  Look it up in the docs, dumbass.  Yeah, what docs?  Time to get in the car and drive over to look for yourself, cursing all the way that you have no documentation.
  • Need switch zoning information for that fabric?  See above.  At least you can login to the switch remotely… until Java fails.  Drive over.
  • Time to build a server.  Time to put it into production.  What do you mean it’s got a bug we fixed two years ago?  Oh, shit.  We forgot that NoServerTimeZoneComp registry key.  It’s always the Mac users that make your Windows admin lives hell, right?  No, buddy, it’s because you didn’t follow the documentation.  Uhh, what documentation?

I think I’ve stated my case.  Now then.  I want software that can overcome the burden of writing this documentation and I want it available in damn near real time.  So, here goes.

I want data center management software that (a rough sketch of what I mean follows this list):

  • …is object-oriented, like C++.  I want to be able to instantiate a new instance of a Dell 2950 and define its properties – like what rack it’ll be in and what U numbers it occupies.
  • …can perform discovery on that new Dell 2950 and figure out the rest of the properties for the object (a la service tag number, CPU, RAM, maintenance left on contract, etc.)
  • …can allow me to connect it to a specific network switch port by dragging and dropping a line, like in Visio.
  • …can allow me to connect it to a storage area network like the network connection above.
  • …can produce a 3-dimensional rack drawing (the rack itself should be just another object, since we’re object-oriented and the server objects are just properties) that details every network connection, fibre hookup and power connection.
  • …can, upon sensing a failure from SCOM 2007 or NetIQ, flag every server and cable involved in the failure so I can look for common properties in an anomaly (because it’s always the network’s fault).
  • …is able to produce a server installation document by right-clicking on it and selecting “current state documentation.”  I want it in PDF format so I don’t have to open fracking Microsoft Word ever, ever, ever again.  I want it to be able to spot every piece of software that is loaded on the server.  I want it to be able to tell me every patch and registry tweak that has been applied to that server since I racked it and installed the operating system.
  • …is able to alert me when servers are about to run out of maintenance.
  • …is visual enough that the customers can use a dashboard of sorts to view some of the same properties and elements that I need to see.
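
Since I’m dreaming anyway, here’s a rough PowerShell sketch of the kind of object model I’m describing above.  Every function, property, and name below is invented for illustration – none of this is a real product:

# Purely illustrative sketch of the object model I'm after; every name is made up.
function New-RackObject {
    param([string]$Name, [int]$TotalU)
    [pscustomobject]@{ Name = $Name; TotalU = $TotalU; Servers = @() }
}

function New-ServerObject {
    param([string]$Model, [string]$ServiceTag, $Rack, [int[]]$UNumbers)
    $server = [pscustomobject]@{
        Model       = $Model        # e.g. 'Dell 2950'
        ServiceTag  = $ServiceTag   # discovery would fill in CPU, RAM, contract dates, etc.
        Rack        = $Rack.Name
        UNumbers    = $UNumbers
        SwitchPorts = @()           # drag-and-drop in the GUI; a plain list here
        FibrePorts  = @()
    }
    $Rack.Servers += $server
    return $server
}

# Usage: instantiate a rack and a Dell 2950, then "cable" it to a switch port.
$rack   = New-RackObject -Name 'Row3-Rack07' -TotalU 42
$server = New-ServerObject -Model 'Dell 2950' -ServiceTag 'ABC1234' -Rack $rack -UNumbers 20,21
$server.SwitchPorts += 'core-sw-01:Gi1/0/17'

# "Current state documentation" would walk objects like these and render a report.
$rack | ConvertTo-Json -Depth 4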

I think you see the challenge here.

Now I ask you…

…why doesn’t this software exist?


Where PowerShell Fails

I’m all about negativity today. Sorry.

Anyway, I’ve had something nagging at me for a while now and I think I’ve just figured it out. PowerShell is Microsoft’s answer to having had a dumb command line through the Win95 – Win2003 years, and it’s quite powerful, as the name implies. Microsoft likes it so much that it put most of the Exchange 2007 administration effort into the Exchange Management Shell, a layer on top of PowerShell that adds Exchange-specific cmdlets.

I’ve long bemoaned to our internal support personnel… and… well, probably my Microsoft contacts too… how discombobulated PowerShell actually is. It’s as if it was designed with no standard in mind for the commands – each developer wrote their own cmdlet with their own switches and methods to do things the way they saw fit.

But it’s actually worse than that. Now I’ve come to realize that the problem with managing Exchange from the shell is not just the lack of standardization – a great deal of this SHOULDN’T be done with a shell command at all. I’ve heard that PowerShell was designed to attract Linux admins who prefer the command line, and that’s fine. But I don’t know of a Linux admin who would type a command to set a disclaimer on an entire mail organization; he or she would edit a config file of some kind. That way, not only would the disclaimer setting be readily apparent and visible, it wouldn’t take some obscure command just to show me the meat of the option.

What tripped this realization was the “power tip” I got when I opened the Exchange shell on one of our servers:

Tip of the day #58:

Do you want to add a disclaimer to all outbound e-mail messages? Type:

$Condition = Get-TransportRulePredicate FromScope
$Condition.Scope = "InOrganization"
$Condition2 = Get-TransportRulePredicate SentToScope
$Condition2.Scope = "NotInOrganization"
$Action = Get-TransportRuleAction ApplyDisclaimer
$Action.Text = "Sample disclaimer text"
New-TransportRule -Name "Sample disclaimer" -Condition @($Condition, $Condition2) -Action @($Action)
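
For that matter, even reading that setting back afterward means another trip through the shell.  Get-TransportRule is the stock Exchange 2007 cmdlet here, as far as I know, so something like this:

Get-TransportRule | Where-Object { $_.Name -eq "Sample disclaimer" } | Format-List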

Why am I not looking in a config file for this information? Fail.


When RUS Strikes

One item you’ve probably learned by now if you’re an Exchange admin working on a 2007 deployment is that Microsoft has changed the behavior of the recipient update policy.  Most of you won’t care about this and that’s just fine.  You shouldn’t.  I would dare say that if your Exchange environment is engineered well and planned out the way Microsoft probably expects it to be, you should have almost no issues whatsoever.

Consider, however, if you’ve deployed Exchange with some type of “non-standard” approach.  Yes, please picture air quotes around that.  We’re trying to be politically correct here.  What if your Exchange deployment wasn’t, for instance, master of all mail within your TLD?

Let’s say you have a TLD of contoso.com.  Now let’s say you set up an Exchange service forest called services.contoso.com (see my earlier post about why an Exchange service forest is a Bad Idea).  Now let’s say that because there are many other businesses and entities within contoso.com that route their own mail, the decision is made that Exchange cannot be authoritative for all mail coming in to contoso.com.  You need to forward it up to some traffic directors at the top level to determine where the traffic goes.  Now you have Exchange installed in a service forest and you’re not authoritative for contoso.com.  So let’s say you decide to become authoritative for mail.contoso.com.

Now your recipient policy probably says that when new users are created, give them a services.contoso.com and a mail.contoso.com SMTP address.  What about the contoso.com address?  Well, since you’re handling that elsewhere, a third-party process has to come in and manually assign that address.  Fine.

Now in 2003, once the user object is created and the addresses are stamped, RUS will never touch the object again or muck with it unless you forcibly tell it to do so.  Believe me though, it’s rare in this setup that you’ll be rerunning it manually.

When you begin to roll out Exchange 2007, you get a new issue.  If you’re configured in this manner and make any changes to the user object… say… moving a mailbox or anything of that nature… then you’ll notice that RUS will take your user object and mangle it up according to what it thinks the SMTP addresses should be.  It’ll reset the primary address.  Fun.  Now your users start to complain that their mailing list memberships are failing, their business cards are incorrect, yadda yadda.  Yes, the behavior of RUS changed in 2007 from 2003.  Take note of it, because if you’re set up in a wonky way that prevents you from being authoritative in your domain, this is going to bite you once for every user you have.
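
If you’re stuck in this configuration, one mitigation worth testing (in a lab first) is to tell Exchange not to re-apply the address policy to those mailboxes at all.  EmailAddressPolicyEnabled is a stock Set-Mailbox parameter in 2007; the scoping below is just an example:

# Stop Exchange 2007 from re-stamping addresses on these mailboxes when they change.
Get-Mailbox -OrganizationalUnit "services.contoso.com/Users" -ResultSize Unlimited |
    Set-Mailbox -EmailAddressPolicyEnabled $false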


“Chrome” set to reignite old tensions

Continuing my recent tradition of expressing opinions that are likely to be fairly unpopular with my peers, tonight I’m going to rag on Google’s “Chrome” project and tell you why this is a Bad Idea™.  I’ll try to keep this short (update: I failed).  Consider this a discussion starter, not a final statement.  I’ll probably elaborate on these points on the next NO CARRIER, so be sure to give me some feedback here.

Key points:

The Browser War is Pointless

Anyone who still thinks the browser war is anything worth fighting is absolutely delusional.  The whole point of having a web browser is to serve as an open portal to content, not to give your company the biggest tool at the urinal.  The web was created for serving content regardless of what application you used to view that content.  In that spirit, what’s the point of fighting over this?

I understand the key differences between browsers and that some browsers have perceived advantages over others.  I understand that all too well.  One of the things you used to give up when you made a conscious decision to be a Mac or Linux user was having the de facto browser of the net – the one with no intention whatsoever of conforming to a standard – in your pocket.  Being a Mac or Linux user means you have more than one browser installed and you use the right tool for the job.  The fact is, the right tool for the job shouldn’t matter, because HTML… er, XHTML or whatever it is this week… is a standard, right?

Companies do not live or die based on whether or not you use their browser.  Well, unless you’re Opera, maybe.  But I digress.

We all know Microsoft is starting to wake up to this fact and has indeed promised to help further this idea.  That’s great.  It only bolsters my argument then.  It used to be that the browser war was about dominating in your interpretation of the standard.  Now that’s less and less important because standards are being followed (well, in general).  So… why bother?  What does it do for Google to compete in this browser market?

I know the answer to this and so do you.  We’ll talk about that later.  But for now, just believe me.  This market share thing is pointless.  I felt the same way when Steve Jobs declared war on IE with Safari on Windows.  That just upset me.  All that does is tie a huge steel ball around Apple’s ankle and toss it in the ocean.  Apply that to Google now too.

Moving on.

Browsers are “planet” apps

Browsers are becoming “planet” applications with lots of satellites (plugins).  For example, I use MobileMe which hooks into Safari or IE for bookmark synchronization… but not Firefox!  Many people I know and love prefer Firefox because of the various plugins that “better” their browser.

The point I’m trying to make here is that the browser is not a monolithic application.  You spend time adding whipped topping and chocolate shavings on top to get it just the way you want it.  You’ve now installed satellite applications that tailor the experience for you.

Now along comes a new browser with no support for those satellites.  You have a new planet that will support no moon.  Are you going to pack up your cheese and move to it?  What happens when Chrome doesn’t support your favorite plugins?  Okie, fine.  I know they have said they plan to support Firefox plugins.  But will MobileMe bookmark sync work?  Probably not.  That’s so crucial for me that it’s a deal killer.

As a matter of fact, there’s a good solution to this – and it would help out everyone’s favorite argument: security.  Don’t support these plugins.  Just be monolithic and require extra functionality to be external to your application.  That would change the game entirely… for the better.

A New Security Nightmare

The story you didn’t read the other day was how enterprise administrators everywhere were groaning about the release of Chrome.  While they salivated about using it at home perhaps, what’s happening in the workplace is a whole nutha story.

Google woke up and unleashed Chrome on the world this week and millions of people downloaded it.  I’ll bet a great deal of those people were at work when they did it.  I bet they installed it on their work PCs.

So.  You’ve just taken a brand new application with no security track record (and let’s face it, Google’s record here is not clean)… an application that is now your portal to the most insecure and infested part of the Internet… and added it to your company’s PC.  You’ve just made that PC a tremendous liability, and your enterprise administrator is likely ready to kick your ass.

The web is the most dangerous place on the net.  Everywhere you look it’s teeming with viruses, JavaScript exploits, cross-site scripting bugs and other nasties.  The web browser is the simplest and quickest way into your PC.  So let me get this straight.  You just installed that thing on your nice and secure corporate PC?

“Well, it’s not Internet Explorer, so I’m good!” you might say.  Nice argument.  Never mind the fact that a large percentage of web exploits occur in JavaScript itself.  Guess what Chrome’s focus is?  Making JavaScript a “better experience” for the web browsing public.  Did you just get a shiver?  If not, you’re not paying attention.

Indeed, within hours of release, Chrome was proven to be subject to a carpet bombing flaw.  Look it up if you don’t know what that is.  I’m too fired up to bother linking it 😉

A Cloud OS Should be Standards Based

Now we get to the strategic part of the discussion.  This is where Google’s motive comes in.  They’ve been building the “cloud OS” so to speak for years now.  They envision a world where you can sign in with a single username and password from anywhere and use applications just as you would your desktop, complete with the data you work with.  Chrome is their method of furthering that agenda.

That’s great, except that the cloud as a business data model hasn’t really shaken out to be a good idea.

I still do not know of any large enterprise willing to put its data up on the public web.  Better yet, I do not know of any large enterprise willing to compromise on SLAs for its critical data.  They’d better start thinking about that if they plan on moving to the “cloud.”  The “cloud” has already shit itself more than once.  Google, Amazon, Apple and all other types of cloud computing folks have had severe troubles recently.  It’s an unproven model, and with the way you hear people talk about it like it’s the second coming… you’ve got another dotBomb shaping up here.

Chrome is supposed to make Google’s cloud computing experience better, since JavaScript was their focus and JavaScript is their operating system.  Neat.  I’d suggest you stay off other sites, since their new interpretation of JavaScript and its VM could leave you open to all sorts of other vulnerabilities (see: security).  How about making sure that business model is intact before you put too much time and money into it?

Open Source – Who Cares?

A lot is being ballyhooed about the fact that Chrome is open source.  Hooray!  Why is that a win, exactly?  Because you can send patches to Google?  Think they’re going to include your code in their release when they have a fairly clear agenda?

Red herring, folks.  They couldn’t give a shit about your code.  They just wanted something else for the PR.  Honestly, what does it buy them to be open source for this project?

It sure bought them an interesting blog post (see: security) about how everything you type is sent back to the Google mothership, including sites you visit.  Shivering yet?  Woo, aren’t you glad you installed that on your CORPORATE PC!?!?

And Finally…

Just in case you’re still wondering what the purpose might be of the Chrome browser and why you’re using it… 

Google’s business model is advertising.

Think about it, H.I.


How a resource forest can make you cry


This post is focused on those of you who have decided to deploy Exchange in a resource forest.  You’re in for tears.  While the resource forest is technically a supported deployment method for Exchange, I’m going to point out what can go wrong in your Exchange world that will keep your admins up at night.

Let’s start with the definition of a resource forest, just in case you’re not sure.  The resource forest approach means you have one Active Directory forest where your user accounts live and another Active Directory forest where your application (Exchange, in this case) lives.  You have user accounts in the resource forest that are disabled and then externally associated with the users in the user forest.  This, of course, requires a trust between the two forests, which you likely have anyway, right?  Right.

A disabled user in the resource forest means the attribute msExchMasterAccountSID is empty.  This value is required for Exchange to identify and resolve the user account when permissions are calculated against a mailbox or a folder in a mailbox; for instance, in a delegation scenario.  If your user accounts and Exchange live in the same forest, then this is set to “SELF” in Active Directory Users and Computers/Exchange Advanced/Mailbox Permissions.  This writes the SID of the user account into the msExchMasterAccountSID attribute, which is then used to identify the user.  This also means that the forest is able to “track” the operations of this account.  If the account is disabled or deleted, when ACLs are calculated against the msExchMasterAccountSID value, everything is hunky dory and happy.

When you have a resource forest setup and you externally associate a user from the user forest to a disabled user in the resource forest, what you’re really doing is writing the SID from the user object in the user forest to the msExchMasterAccountSID.  Now, that’s the SID that will be stamped on a folder or object that gets ACL’d with your permissions… keep in mind, this is the SID from the user forest.

Now when Exchange needs to calculate the permissions, it will run across that SID and go talk to a domain controller to resolve it.  The DC proxies the request over to a trusted DC in the user forest and returns with the answer.  This traffic pattern can be headache-inducing all by itself, but that’s a topic for another day.

So now here’s the problem.  Because these SIDs are external to the forest, the resource forest has no way of knowing whether a given SID is still valid.  In other words, if you whack a user account in the user forest, the resource forest is never notified of that SID’s destruction.  You now have what I call “SID ghosting.”  I’m sure there’s a proper term for it, but that’s the term I use around here.
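
If you suspect you already have ghosts, a rough detection sweep looks something like this.  It’s plain ADSI and .NET rather than anything Exchange-specific, and you’d want to test it against your own forest before trusting the output:

# Rough sketch: sweep the resource forest for "ghosted" master account SIDs.
# Pull msExchMasterAccountSID off every object that has one and try to resolve it;
# SIDs that no longer translate to an account are candidates for cleanup.
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = "(msExchMasterAccountSID=*)"
$searcher.PageSize = 1000
[void]$searcher.PropertiesToLoad.Add("distinguishedName")
[void]$searcher.PropertiesToLoad.Add("msExchMasterAccountSID")

foreach ($result in $searcher.FindAll()) {
    $dn       = $result.Properties["distinguishedname"][0]
    $sidBytes = $result.Properties["msexchmasteraccountsid"][0]
    $sid      = New-Object System.Security.Principal.SecurityIdentifier($sidBytes, 0)
    try {
        # If the SID still maps to an account across the trust, all is well.
        [void]$sid.Translate([System.Security.Principal.NTAccount])
    } catch {
        Write-Output "Ghosted SID $sid on $dn"
    }
}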

Let’s look at an example.

Mary D. is a manager.  She has an administrative assistant, Ken G.  She assigns delegate permissions to Ken G. so he can manage the calendar.  What she has really done is stamp Ken G.’s SID from the user forest on her calendar as a permission object.  If you were to look at her calendar with pfdavadmin and check the permissions, you would see Ken’s access expressed as USERFOREST\KenG, not RESOURCEFOREST\KenG.  This is because the SID value from Ken’s account in the user forest is stamped in his msExchMasterAccountSID attribute in the resource forest.

Now let’s pretend Ken G. was looking at pr0n one day and got busted.  He’s terminated at the user forest and his account is deleted.  Now the resource forest still has his account and the ACL still exists on the calendar.  To preserve Ken G.’s data, his account in the resource forest is not deleted, but let’s say they shut down mail delivery by setting his mailbox quota to 0 or something.

What you have now is thus: every time Mary gets a meeting invitation, she will get an automatic bounce from Ken G.’s mailbox.

From a usability perspective, this sounds crazy.  If it’s happening to a top end manager (which, let’s face it, is where this will usually happen), they’re likely to go berserk and demand that you fix it right away.  When you research it, you find out that Ken G.’s SID is still stamped on Mary’s calendar.  This is because the resource forest has no way of knowing that the user object in the user forest was whacked and the mail delivery is now failing due to the disabled mailbox in the resource forest.

Let’s make it worse.  What if Ken G. had an assistant?  What if that assistant had another assistant?  What if your users created a delegation chain twenty people deep?  Well, then what might happen is Mary would get a meeting invitation and then a bounce from someone way down in the chain, perhaps someone she doesn’t even know!  That one is really hair-raising.

How do you debug this?  Well, as far as we’ve determined, the best you can do is open up pfdavadmin, figure out who delegated rights to whom, and follow the breadcrumb trail.  If your users overuse delegation, this can be a painful exercise.  They should not be adding more than 4 delegates to a mailbox under any circumstances, but that’s a talk for another day.  With anything more than 4 delegates, they probably only need sharing permissions anyway, so consider using those instead.

If you’re really paying attention, apply all of this knowledge to SharePoint.  Try setting permissions to your trusted user objects in the user forest.

Now think of all this (SharePoint included) and think of the day that management decides this just isn’t working – you need to get all applications and user objects into the same forest.  Did your brain just explode?  If not, you’re not paying attention.  Key words: SID, msExchMasterAccountSID, and SharePoint permissions.

Run.  Run screaming from the resource forest.  Friends don’t let friends set this up.

Really.
