C - How is it possible to ensure that code doesn’t just crash due to an uncaught type error and pointers pointing randomly into memory?

@bwat that was very interesting to read! It also jogged a memory of a job for the Glasgow Underground where we had to replace a few km of cable with fibre optic cable because the electrical discharge from the tracks was so bad it kept frying the UARTs, and this was back when fibre was fairly new, IIRC.

Also @grossdan, it reminded me of another “tactic” we used for defensive programming. A lot of the “solutions” were single-board computers, and quite often the code was hand-written assembler, commonly for the 8051 or 6809. We used to instruct the linker to fill all unused memory with a byte, let’s call it 0xCF for example, which was a single-byte interrupt instruction that always jumped to a known place in low memory. From there we dumped the registers and stack into some NVRAM and killed the fuse.
A separate diagnostic tool could then plug into the D25 (!) on the board and read back the data to analyse what went wrong.
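Roughly, the pattern looks like this. It’s a sketch only: the 0xCF fill byte, the NVRAM address and the register layout are all invented here, and the real handlers were hand-written assembler, but it shows the “fill unused memory with a one-byte trap, then dump state and stop” idea:

    /* With GNU ld, for example, an output section can be given a fill byte
       for any gaps in the linker script:
           .text : { *(.text) } = 0xCF
       where 0xCF stands in for a one-byte software-interrupt opcode on the
       target CPU, so a runaway jump into "unused" memory traps immediately. */

    #include <stdint.h>

    #define NVRAM ((volatile uint8_t *)0x8000)   /* hypothetical NVRAM window */

    struct crash_record {
        uint8_t a, b;            /* accumulator snapshot (made-up layout)    */
        uint8_t x_hi, x_lo;      /* index register                           */
        uint8_t sp_hi, sp_lo;    /* stack pointer                            */
        uint8_t stack_top[32];   /* copy of the top of the stack             */
    };

    /* Called from the trap's ISR stub with a snapshot of registers and stack:
       dump it into NVRAM and stop dead, so the external diagnostic tool can
       read it back later over the D25 connector. */
    void trap_handler(const struct crash_record *snap)
    {
        const uint8_t *src = (const uint8_t *)snap;
        for (unsigned i = 0; i < sizeof *snap; i++)
            NVRAM[i] = src[i];
        for (;;) { }             /* halt; "killing the fuse" would go here */
    }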

interesting …

do FPGAs exhibit such issues these days …

absolutely no idea!

What issues?

As an interesting aside, Prolog has always been popular with prototype ASIC/FPGA dev tools. Hardware description languages and Prolog aren’t a million miles apart. I found Verilog fairly easy.

Unreliability of hardware due to external (electric) forces.

Could you point me to some papers or references for that? This sounds very interesting.

What is it that makes them close?

Dan

An FPGA sits on a motherboard, so the long-term average bit error rate is 10^-15 regardless of logic implementation technology. Add a network and you’ve added a very weak link (fiber optics: 10^-9, twisted pair: 10^-6, wireless: 10^-4).

Add power spikes on input lines from a Glaswegian underground train and anything blows!

https://pdfs.semanticscholar.org/9ee6/3f8ee045613fae478d17694b82157c711a62.pdf

  1. Testing
  2. Assertions
  3. Linting (and other tools), including all Compiler Warnings On
  4. Sane development practices (including unit testing) with skilled co-workers (fresh engineers out of uni, only in diluted concentrations)
  5. Defensive programming (see all of “Code Complete”, the book); a minimal sketch follows after this list
  6. Adhere to (a subset of) the MISRA C coding guidelines or similar for high-assurance projects (the fact that “MISRA C coding guidelines” even exist means that the C ecosystem is insane in and of itself, but “You have to make the good out of the bad because that is all you have got to make it out of,” as Robert Penn Warren wrote)
  7. Most importantly: do NOT program. Use well-tested, widely used, best-of-breed open-source libraries instead of rolling your own, so you just have to glue modules together.
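To make points 2 and 5 concrete, here is a minimal sketch in C. The function name, the protocol limit and the buffer sizes are invented for the example; the pattern is the point: validate everything that crosses a trust boundary, assert internal invariants, and fail loudly rather than scribble over random memory (and build it with all warnings on, per point 3).

    #include <assert.h>
    #include <errno.h>
    #include <stddef.h>
    #include <string.h>

    #define MAX_FRAME_LEN 256   /* hypothetical protocol limit */

    /* Copy an incoming frame into a caller-supplied buffer, defensively.
       Returns 0 on success, -1 with errno set if the inputs are garbage. */
    int store_frame(unsigned char *dst, size_t dst_len,
                    const unsigned char *src, size_t src_len)
    {
        /* Defensive checks on data that crosses a trust boundary. */
        if (dst == NULL || src == NULL || src_len > dst_len ||
            src_len > MAX_FRAME_LEN) {
            errno = EINVAL;
            return -1;
        }

        /* Assertion on an internal invariant: callers are supposed to size
           their buffer for the protocol limit. Violations are programming
           errors we want to catch in testing, not tolerate silently. */
        assert(dst_len >= MAX_FRAME_LEN);

        memcpy(dst, src, src_len);
        return 0;
    }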

Readable: “C Is Not a Low-level Language — Your computer is not a fast PDP-11.”
https://queue.acm.org/detail.cfm?id=3212479

Not trying to be funny, but some of the most influential systems ever built violated 4 through 7. Generalising to other languages, those were violated when Prolog was first invented, and men were put on the moon violating a good number of them too.

Edit: I just want to add that the vast majority of problems we see as programmers are not technical problems but thinking problems. If you can get the right people on your team, then you can work wonders.


https://tams.informatik.uni-hamburg.de/vhdl/doc/faq/FAQ3.html#sstuff_parser
The links seem to have rotted, however.
There’s also this: A set of tools for VHDL design (by Peter Reintjes)


Interesting paper. The Prolog system on a 3 MIPS workstation synthesized an 8,500-gate design in 37 seconds and a 15K-gate design in 60 seconds. According to Hennessy & Patterson (Computer Architecture: A Quantitative Approach, 1st edition), a 32-bit CPU could be put in a single package once gate density reached 25K-50K gates; I assume the variation is down to whether there is on-chip cache, and maybe also RISC chips lacking on-chip microcode memory. Extrapolating the Prolog system timings like a true amateur (60 seconds for 15K gates is roughly 4 ms per gate, so a CPU-sized design of around 32K gates comes out near 130 seconds), synthesizing a 32-bit CPU would take just over two minutes. That seems fast to me. My 32-bit FPGA CPU designs are synthesized in about that time on modern desktops/laptops using Quartus. It’s hard to compare FPGA and CMOS implementations, but it still sounds good!

We are now veering into big engineering projects, and yes, there will always be Indiana Jones-style explorative engineering under pressure (war, big government projects, single-brain-idea implementations in greenfield idea space without many connections to other projects or constraints, completely-bereft-of-economic-justification glorious enterprises … of which only the successful ones are remembered, naturally), but sustainable it is not (“back to the moon” in particular has been coming for some time now). High-assurance software is just not going to be created without 4, 5, 6, and 7. I had the requirement documents for the Galileo space software on CD … it nearly killed me.

I spent a decade in five-nines telecom infrastructure development without 5, 6, and 7. At one point 50% of the world’s telephone users were using our OS. This company designed pretty much everything itself, from screws to CPUs. Oh, and they did some Prolog work as well.

A couple of weeks ago I recorded a webinar on the C language
and MISRA C:

https://www.youtube.com/watch?v=Bv4emdGVAuk&feature=youtu.be

Kind regards,

Roberto

Wow, this is great.

thank you,

Dan

I hope things were written in Erlang.

And, I should have written this point list differently, and not to be contrarian, but, really,

  • No defensive programming?
  • No restriction on C usage?
  • Greenfield development?

Let me just say that I doubt the propositions as listed.

From the archives … when Ericsson Radio AB decided that it couldn’t do software at all and decided to turn its back on Erlang. Someone new must have been hired into the mahogany suite:

Not in my group, we were C and assembly.

Not to the point it was given a name. We were writing normal production C code. There was no ‘cult of X’ going on.

Yes, but not MISRA. The rules are very interesting. I’ve done some MISRA work for a sub-supplier to a large automotive supplier who specified MISRA. The rules followed in telecom infrastructure development (which is of national strategic importance) are nowhere near MISRA.

The very fact they developed Erlang should clue you in to the sort of things going on in Ericsson back in the day. They honestly produced everything from screws to CPUs.

You’re free to do so. I’m just another bloke on the internet.

Ohhhh, I worked with SoftLab (a spinoff from Linköping University) on Plex compiler output.

As for Erlang, it wasn’t huge within Ericsson, but they did use it. I remember there was a group down the street from where I was writing C code that used it, and I had kind of thought about seeing if I could make the move. These days there are a few Erlang programmers at Klarna working on systems for online payment.

Edit: Logic programming and telecom papers: https://www.sciencedirect.com/science/article/pii/0743106690900549

strncpy has its own security issues (it won’t NUL-terminate if the destination is shorter than the source string). I think you want strncpy_s, but it’s been a long time since I’ve programmed in C. snprintf is another option. Or just use C++ strings (and enjoy the world of C++ references and move semantics).
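A toy illustration of the pitfall and the snprintf alternative (the buffer sizes and strings are made up):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *src = "this string is longer than eight bytes";
        char a[8], b[8];

        /* strncpy stops after sizeof a bytes and does NOT append '\0'
           when src is too long, so a is not a valid C string yet. */
        strncpy(a, src, sizeof a);
        a[sizeof a - 1] = '\0';              /* manual fix-up is mandatory */

        /* snprintf always NUL-terminates and reports truncation. */
        int needed = snprintf(b, sizeof b, "%s", src);
        if (needed >= (int)sizeof b)
            fprintf(stderr, "truncated: needed %d bytes\n", needed + 1);

        printf("a=\"%s\" b=\"%s\"\n", a, b);
        return 0;
    }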

I spent 2 years at another telecom infrastructure development company (I left because the winters in Ottawa are too cold, combined with a nice job offer from a friend in Vancouver) which also dabbled in Prolog. We used Protel for development, which was essentially an extended Pascal (when I look at Google’s “Go” language, I see some significant similarities); buffer overflows were impossible (the compiler removed range checks when it could, but we estimated there was still about 10% overhead from range checking, which we considered acceptable in a five-nines environment). We were most amused to see another company using C and an OS that apparently was based on Unix. Our operating system had the same philosophy as Erlang: message passing as the basic IPC, and for errors, log the problem, let the process crash, and a parent process will deal with things (there was even a “commitsuicide()” that was used for “this should not be happening” situations). We didn’t use @dtonhofer’s list but instead depended heavily on smart/skilled programmers, design and code reviews, unit tests, and intensive system/integration-level testing.
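In case the “let it crash and let a parent deal with it” style is unfamiliar outside Erlang, here is a rough POSIX C sketch of the supervision idea. It’s an illustration only, not how Protel or our OS actually worked; the worker’s failure condition is contrived just to make it crash now and then:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* A worker that sometimes dies abnormally, standing in for real code
       that hits a "this should not be happening" situation. */
    static void worker(void)
    {
        if (getpid() % 3 == 0)
            abort();                      /* the "commitsuicide()" moment */
        sleep(1);
        exit(EXIT_SUCCESS);
    }

    int main(void)
    {
        for (int run = 0; run < 5; run++) {
            pid_t pid = fork();
            if (pid == 0) {               /* child: do the work, maybe crash */
                worker();
            } else if (pid > 0) {         /* parent: supervise and log */
                int status = 0;
                waitpid(pid, &status, 0);
                if (WIFSIGNALED(status))
                    fprintf(stderr, "worker %d killed by signal %d, restarting\n",
                            (int)pid, WTERMSIG(status));
                else
                    fprintf(stderr, "worker %d exited with status %d\n",
                            (int)pid, WEXITSTATUS(status));
            } else {
                perror("fork");
                return EXIT_FAILURE;
            }
        }
        return 0;
    }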


Yes, you still have to leave enough room in your destination for the terminating NUL. But rather than C++, why not use a proper high-level language? I like C, but only for low-level stuff. C++ is still too low-level, I reckon. I see C++ used a lot where something like Python would be entirely acceptable, technically and socially.

Yep, message-passing systems can be found in telecom development at all abstraction levels. Message passing gives you communication and a fair amount of synchronisation (well, serialisation) for free. I like it a lot. I’m implementing a Parlog-style language right now to get that message passing with Prolog-style syntax (Parlog had a Prolog ‘all solutions’ call, so there was the possibility of more than just the don’t-care nondeterminism of guarded Horn clauses).

To add some oil to the discussion …

What are your thoughts about Ada as a C replacement …

Ada (from what I recall of using it many years ago) has a very strong type system, it’s object-based (more like Go than C++), and it has language-level support for actor/message-style concurrency (tasks and rendezvous).

And, there is at least one vendor I noticed: https://www.adacore.com/

Dan
