stego-tech 13 hours ago

Excellent critique of the state of observability, especially for us IT folks. We’re often the first - and last, until the bills come - line of defense for observability in orgs lacking a dedicated team. SNMP traps get us 99% of the way there with anything operating in a standard way, but OTel/Prometheus/New Relic/etc all want to get “in the action” in a sense, and hoover up as many data points as possible.

Which, sure, if you’re willing to pay for it, I’m happy to let you make your life miserable. But I’m still going to be the Marie Kondo of IT and ask if that specific data point brings you joy. Does having per-second interval data points actually improve response times and diagnostics for your internal tooling, or does it just make you feel big and important while checking off a box somewhere?

Observability is a lot like imaging or patching: a necessary process to be sure, but do you really need a Cadillac Escalade (New Relic/Datadog/etc) to go to the grocery store when a Honda Accord (self-hosted Grafana + OTel) will do the same job more efficiently for less money?

Honestly, I regret not picking the brain of the Observability head at BigCo when I had the chance. What little he showed me (self-hosted Grafana for $90/mo in AWS ECS for the corporate infrastructure of a Fortune 50? With OTel agents consuming 1/3 to 1/2 the resources of New Relic agents? Man, I wish I had jumped down that specific rabbit hole) was amazingly efficient and informative. Observation done right.

  • jsight 12 hours ago

    >Observability is a lot like imaging or patching: a necessary process to be sure, but do you really need a Cadillac Escalade (New Relic/Datadog/etc) to go to the grocery store when a Honda Accord (self-hosted Grafana + OTel) will do the same job more efficiently for less money?

    The way that I've seen it play out is something like this:

      1. We should self host something like Grafana and otel.
      2. Oh no, the teams don't want to host individual instances of that, we should centralize it!
        (2b - optional, but common: a random team gets saddled with this job)
      3. Oh no, the centralized team is struggling with scaling issues and the service isn't very reliable. We should outsource it for 10x the cost!
    
    This will happen even if they have a really nice set of deployment infrastructure and patterns that could have allowed them to host observability at the team level. It turns out, most teams really don't need the Escalade, they just need some basic graphs and alerts.

    Self hosting needs to be more common within organizations.

    • baby_souffle 9 hours ago

      Another variant of step 2: some individual with a little bit of political capital sees something new and shiny and figures out how to be the first project internally to use Influx, for example, over Prometheus... And now you have a patchwork of dashboards, each broken in their own unique way...

  • rbanffy 13 hours ago

    > But I’m still going to be the Marie Kondo of IT and ask if that specific data point brings you joy.

    There seems to be a strong "instrument everything" culture that, I think, misses the point. You want simple metrics (machine and service) for everything, but if your service gets an error every million requests or so, it might be overkill to trace every request. And, for the errors, you usually get a nice stack dump telling you where everything went wrong (and giving you a good idea of what was wrong).

    At that point - and only at that point - I'd say it's worth TEMPORARILY adding increased logging and tracing. And yes, it's OK to add those and redeploy TO PRODUCTION.
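
    One minimal way to keep that temporary bump cheap, sketched with Go's standard log/slog package (the names and values here are illustrative, not anyone's actual setup): point the handler at a shared LevelVar so the redeploy only has to flip a level - via a flag, env var, or admin endpoint - and flip it back later, instead of touching every call site.

      package main

      import (
          "log/slog"
          "os"
      )

      // One shared level that the handler consults on every log call.
      var level = new(slog.LevelVar) // defaults to Info

      func main() {
          logger := slog.New(slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{
              Level: level, // *slog.LevelVar satisfies slog.Leveler
          }))
          slog.SetDefault(logger)

          slog.Debug("dropped at the default Info level")

          // During the incident window: bump verbosity instead of
          // sprinkling brand-new log lines through the code.
          level.Set(slog.LevelDebug)
          slog.Debug("now visible", "order_id", 42)

          // And turn it back down once the bug is understood.
          level.Set(slog.LevelInfo)
      }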

    • prymitive 12 hours ago

      > There seems to be a strong "instrument everything" culture

      Metrics are the easiest way to expose your application’s internal state, and as the maintainer of that service you’re then in nirvana. Even if you don’t go that far, you’re likely an engineer writing code, and when it comes time to add some metrics, why would you add fewer rather than more? And once you have all of them, why not add all possible labels? Meanwhile your Prometheus server is in a crash loop because it ran out of RAM, but that’s not a problem visible to you. Unfortunately there’s a big gap in understanding between the person writing instrumentation code in their editor and the effect on resource usage at the other end of the observability pipeline.
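
      To make the "why not add all possible labels" trap concrete, here's a hypothetical sketch with the Prometheus Go client (prometheus/client_golang); the metric and label names are made up for illustration:

        package metrics

        import (
            "github.com/prometheus/client_golang/prometheus"
            "github.com/prometheus/client_golang/prometheus/promauto"
        )

        // Bounded labels: roughly (methods x status codes) series. Cheap to keep.
        var requests = promauto.NewCounterVec(prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "HTTP requests served.",
        }, []string{"method", "status"})

        // The trap: an unbounded label like user_id mints a brand-new time
        // series per unique value, and the memory cost lands on the Prometheus
        // server, not on the service emitting the metric.
        var perUser = promauto.NewCounterVec(prometheus.CounterOpts{
            Name: "http_requests_by_user_total",
            Help: "Per-user request counts (a cardinality bomb).",
        }, []string{"user_id"})

        func observe(method, status, userID string) {
            requests.WithLabelValues(method, status).Inc()
            perUser.WithLabelValues(userID).Inc() // one series per user
        }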

      • sshine 10 hours ago

        I can only say, I once tried adding massive amounts of data points to a fleet of battery systems: 750 cells per system, 8 metrics per cell, one cell every 20 ms. It became megabits per second, so we only enabled it when engaging the batteries. But the data was worth it, because we could model live events in retrospect when we had initially been too busy fixing things. Observability is a super power.

        • baby_souffle 9 hours ago

          This right here! Don't be afraid to over-instrument. You can always downsample or apply basic statistical sampling before you actually commit your measurements to the time-series database.

          As annoying as that may sound, it's a hell of a lot harder to go back in time to observe that bizarre intermittent issue...
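
          For what "downsample before you commit" can look like, here's a hypothetical sketch in Go (no particular TSDB assumed): buffer at full collection rate, then write only one averaged point per window.

            package sampling

            import "time"

            // Sample is one raw measurement taken at full collection rate.
            type Sample struct {
                At    time.Time
                Value float64
            }

            // Downsample collapses raw samples into one averaged point per window,
            // so the TSDB stores e.g. one point per 10s instead of 50 points/s,
            // while the raw buffer is still around to dump when chasing that
            // bizarre intermittent issue.
            func Downsample(raw []Sample, window time.Duration) []Sample {
                if len(raw) == 0 {
                    return nil
                }
                var out []Sample
                start := raw[0].At.Truncate(window)
                sum, n := 0.0, 0
                for _, s := range raw {
                    if s.At.Sub(start) >= window {
                        out = append(out, Sample{At: start, Value: sum / float64(n)})
                        start = s.At.Truncate(window)
                        sum, n = 0, 0
                    }
                    sum += s.Value
                    n++
                }
                return append(out, Sample{At: start, Value: sum / float64(n)})
            }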

    • mping 13 hours ago

      On paper this looks smart, but when you hit a bug that triggers only under very specific conditions (weird bugs happen more often as you scale), you are gonna wish you had tracing for it.

      The ideal setup is to trace as much as possible for some given time window; if your stack supports compression and tiered storage, it becomes cheaper.

    • Nextgrid 13 hours ago

      > but do you really need a Cadillac Escalade (New Relic/Datadog/etc) to go to the grocery store

      Depends if your objective is to go to the grocery store or merely to show off going to the grocery store.

      During the ZIRP era there was a financial incentive for everyone to over-engineer things to justify VC funding rounds and appear "cool". Business profitability/cost-efficiency was never a concern (a lot of those businesses were never viable and their only purpose was to grift VC money and enjoy the "startup founder" lifestyle).

      Now ZIRP is over, but the people who started their career back then are still here and a lot of them still didn't get the memo.

      • stego-tech 13 hours ago

        > During the ZIRP era there was a financial incentive for everyone to over-engineer things to justify VC funding rounds and appear "cool".

        Yep, and what’s worse is that…

        > Now ZIRP is over, but the people who started their career back then are still here and a lot of them still didn't get the memo.

        …folks let go from BigTech are filtering into smaller orgs, and the copy-pasters and “startup lyfers” are bringing this attitude with them. I guess I got lucky enough to start my interest in tech before the dotcom crash, my career just before the 2008 crash, and finished my BigTech tenure just after COVID (and before the likely AI crash), and thus am always weighing the costs versus the benefits and trying to be objective.

        • Nextgrid 11 hours ago

          > folks let go from BigTech are filtering into smaller orgs, and the copy-pasters and “startup lyfers” are bringing this attitude with them

          Problem is, not all of them are even doing this intentionally. A lot actually started their career during that clown show, so for them this is normal and they don't know any other way.

          • stego-tech 10 hours ago

            Yeah, very true, and those of us with more life and career experience (it me) have a societal contract of sorts to teach and lead them out of bad habits or design choices. If we don’t show them a better path forward, they’ll have to suffer to seek it out just like we had to.

denysvitali 14 hours ago

I don't think the comparison is correct. For sure OTEL adds some overhead, but if you're currently ingesting raw JSON data, then even with that overhead the volume is probably going to be reduced, since internally the system talks OTLP - which is often (always?) encoded with protobuf and most of the time sent over gRPC.

It's then obviously your receiving end's job to take the incoming data and store it efficiently - grouping it by resource attributes, for example (since you probably don't want to store the same metadata 10 times). But especially thanks to the flexibility of attaching all the surrounding metadata (rather than just shipping the single log line), you can do magic things like routing metrics to different tenants / storage classes, or dropping them.
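
A simplified sketch of that shape (not the real OTLP protobuf definitions, just the idea): resource-level metadata is stored once per batch instead of being repeated on every record, which is also what makes routing or dropping whole groups cheap at the collector.

  package otlpish

  // Illustrative types only - the actual OTLP schema nests
  // ResourceLogs > ScopeLogs > LogRecord, but the principle is the same.

  // Resource attributes (service.name, host, k8s pod, ...) appear once.
  type Resource struct {
      Attributes map[string]string
  }

  // LogRecord carries only what is unique to the event.
  type LogRecord struct {
      TimeUnixNano uint64
      Severity     string
      Body         string
      Attributes   map[string]string
  }

  // ResourceLogs groups many records under one shared Resource.
  type ResourceLogs struct {
      Resource Resource
      Records  []LogRecord
  }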

Having said that, OTEL is both a joy and an immense pain to work with - but I still love the project (and still hate the fact that every release has breaking changes and 4 different version identifiers).

Btw, one of the biggest wins in the otel-collector would be to use the new Protobuf Opaque API, as it will most likely save lots of CPU cycles (see https://github.com/open-telemetry/opentelemetry-collector/is...) - PRs are always welcome I guess.

maplemuse 14 hours ago

The part about SNMP made me laugh. I remember integrating SNMP support into an early network security monitoring tool about 25 years ago, and how it seemed clunky at the time. But it's continued to work well, and be supported all these years. It was a standard, but with very broad tool support, so you weren't locked into a particular vendor.

  • blinded 14 hours ago

    snmp-exporter ftw

    • rbanffy 13 hours ago

      And, for a lot of things, it's quite sufficient.

      I used Munin a lot as well in the 2005-2010 timeframe. Still do as a backup (for when Prometheus, Grafana, and InfluxDB conspire against me) on my home lab.

      Usually the 15-minute collection interval is just fine. One time, though, I had an issue with servers that were just fine and then crashed and rebooted, with no useful metrics collected between the last "I'm fine" and the first "I'm fine again".

      At that point we started collecting metrics (for only those servers) every 5 seconds, and we figured out someone introduced a nasty bug that took a couple weeks of uptime to run out of its own memory and crash everything. It was a fun couple days.

cortesoft 13 hours ago

Setting up a self hosted prometheus and grafana stack is pretty trivial when starting out. I run a Cortex cluster handling metrics for 20,000 servers, and it requires very little maintenance.

Self-hosting metrics at any scale is pretty cost effective.

pojzon 13 hours ago

Difference between OTel and other, previous standards is that OTel was created by “modern” engineers that don't care about resource consumption, or don't even understand it. Which is funny, because that's what the tool is about.

So yea, cost of storage and network traffic is only going to balloon.

There is room for improvements and I can already see new projects that will most likely gain traction in upcoming years.

  • mitjam 5 hours ago

    When SAP swapped their mainframe-era GUI for an HTML/HTTP-based one, our management was shocked by the tripled network bandwidth and by how slow the system felt after the upgrade. At least functionality was on par.

  • growse 12 hours ago

    One of the biggest fallacies I see in this space is people looking at an observability standard like otel and thinking "I must enable all of that".

    You really don't have to.

    Throw away traces. Throw away logs. Sample those metrics. The standard gives you capabilities, it doesn't force you to use them. Tune based on your risk appetite, constraints, and needs.
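
    For instance, head sampling in the OTel Go SDK is a small configuration choice rather than an all-or-nothing switch; a sketch, assuming go.opentelemetry.io/otel/sdk/trace and keeping roughly 1% of traces:

      package main

      import (
          "context"

          "go.opentelemetry.io/otel"
          sdktrace "go.opentelemetry.io/otel/sdk/trace"
      )

      func main() {
          // Keep ~1% of root traces; ParentBased makes child spans follow the
          // root's decision, so traces are kept or dropped as a whole.
          tp := sdktrace.NewTracerProvider(
              sdktrace.WithSampler(sdktrace.ParentBased(
                  sdktrace.TraceIDRatioBased(0.01),
              )),
          )
          defer tp.Shutdown(context.Background())
          otel.SetTracerProvider(tp)

          // Spans created via otel.Tracer(...) are now sampled at ~1%.
      }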

    My other favourite retort to "look how expensive the observability is" is "have you quantified how expensive not having it is". But I reserve that one for obtuse bean counters :)

    • Spivak 8 hours ago

      The onus is on the one asking to spend the money to demonstrate and quantify the business value and compare it to alternatives. Our field could do with a bit more justifying our purchases with dollar values.

      • growse 2 hours ago

        I agree. But not enough people are costing out the 'do nothing' option (it's not zero!).

ris 12 hours ago

The logging examples given don't appear to be too different from what any structured & annotated logging mechanism would give you. On top of that, it's normally protobuf-encoded and shipped over gRPC, so that's already one up on basic JSON-encoded structured logs.

The main difference I see with otel is the ability to repeatedly aggregate/decimate/discard your data at whatever tier(s) you deem necessary using opentelemetry-collector. The amount of data you end up with is up to you.

hermanradtke 13 hours ago

New Relic, Datadog, etc are selling their original offering but now with otel marketing.

I encourage the author to read the honeycomb blog and try to grok what makes otel different. If I had to sum it up in two points:

- wide rows with high cardinality

- sampling

ramon156 12 hours ago

All I want is spanned logs in JS. Why do I need OTEL? Why can't pino do this for me?

sheerun 10 hours ago

Trapped in his time I see

invalidname 7 hours ago

Cloud companies will charge you if you use their features without limits: news at 11...

Observability solutions of this type are for the big companies that can typically afford the bill shock. These companies can also afford the routine auditing process that makes sure the level of observability is sufficient and inexpensive. Smaller companies can just log into a few servers or a dashboard to get a sense of what's going on. They don't need something at this scale.

Pretty much every solution listed lets you fine-tune how much data you observe and retain. I'm personally more versed in OneAgent than OTel, and there you can control data ingestion to a very fine level.

dboreham 13 hours ago

Uhhh. The point of OTel is that you can host it yourself. And you should, imho, unless you're part of a VC money-laundering scheme where they want to puff up the numbers of NR or DD or whichever portfolio company.

  • jsight 12 hours ago

    In my experience, the people willing to pay the most to not host it themselves are often the big companies that are long past VC money.

    They'll gladly pay someone to do it and have a big team of engineers and planners to support the outsourcing.

    Efficiency isn't what bigco inc is about.

    • xyzzy123 11 hours ago

      BigCos have seen teams come and go, whole departments slaughtered by reorgs. They have seen weird policy changes, political battles, personal beefs and bad managers that trigger waves of attrition.

      They know that even if you have the capacity to run something internally today, that is a delicate state of affairs that could easily change tomorrow.

  • rbanffy 13 hours ago

    You should always think about how much it'll cost for you to roll out and maintain something vs how much it would cost to buy the service from a vendor.

    Chances are your volumes are low enough it will be actually cheaper to run with something like New Relic or Datadog. When the monthly bill starts reaching 10% of what a dedicated person would cost, it's time to plan your move to self-hosted.
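
    To put a hypothetical number on that rule of thumb (assuming, purely for illustration, a fully loaded cost of about $200,000/yr for that dedicated person):

      0.10 \times \frac{\$200{,}000}{12\ \text{months}} \approx \$1{,}700\ \text{per month}

    i.e. somewhere in the low four figures per month is roughly where the build-vs-buy math starts to flip.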

    • mdaniel 10 hours ago

      > it's time to plan your move to self-hosted.

      No, it's always time to plan the move to self hosted, and just occasionally choose someone else to be the "self." Because once a proprietary vendor gets in the stack, evicting them is going to be a project

      I'm aware that this doesn't split cleanly down the "saas only feature" or the evil "rug pull" axes, but I'd much rather say "I legitimately tried to allow us to eject from the walled garden and the world changed" versus "whaddya mean non-Datadog?"