I finally taught (demonstrated) a lab for the first time. Yay! These first-year students are soooo cute! We were doing basic logic gates, working up to a full adder. Their confusion over DeMorgan's law was priceless!
For those who haven't done boolean algebra (which means you haven't done engineering, philosophy, computer science, math, or statistics), these laws state that:
not(X and Y) = not(X) or not(Y)
not(X or Y) = not(X) and not(Y)
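Since both laws only involve two boolean inputs, you can brute-force check them over all four combinations. A quick sketch in Python (a software stand-in, obviously — the lab itself was breadboards and chips):

```python
# Verify both De Morgan laws by exhaustive check over all boolean inputs.
for X in (False, True):
    for Y in (False, True):
        assert (not (X and Y)) == ((not X) or (not Y))
        assert (not (X or Y)) == ((not X) and (not Y))
print("Both De Morgan laws hold for all inputs.")
```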
They're occasionally useful in proofs (although the proofs that you get in second-year philosophy courses tend to be fairly contrived examples), but absolutely vital in electrical engineering. You see, if you have a chip with 3 OR gates and 3 AND gates, and you need 4 OR gates... deMorgan's to the rescue!
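To make the rescue concrete: the second law, read right to left, says X or Y = not(not(X) and not(Y)) — so an OR gate can be rebuilt from one AND gate and three inverters. A minimal Python sketch of that substitution (my own illustration, not from the lab sheet):

```python
# An OR gate built only from AND and NOT, via De Morgan:
#   X or Y == not(not(X) and not(Y))
def or_from_and(x: bool, y: bool) -> bool:
    return not (not x and not y)

# Check it agrees with a real OR on every input combination.
for x in (False, True):
    for y in (False, True):
        assert or_from_and(x, y) == (x or y)
```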
I especially loved one kid's reaction to me showing him how to double-negate a term to make it more obviously susceptible to De Morgan's. "You can do that?!" heh, yep.
(The term in question: not(A) and B. Make that not(not(not(A) and B)), and it's easier to see that you can use the first of those laws on the inner expression. Oh, and if you write not(A) as "A with a bar on top" on paper, it's also easier to see.)
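Working the trick all the way through: double-negate, apply the first law to the inner negation, then cancel the not(not(A)), and the term collapses to not(A or not(B)). A quick check that the derivation is right:

```python
# not(A) and B
#   = not(not(not(A) and B))         # double-negate the whole term
#   = not(not(not(A)) or not(B))     # first De Morgan law on the inner not
#   = not(A or not(B))               # not(not(A)) = A
for A in (False, True):
    for B in (False, True):
        assert ((not A) and B) == (not (A or (not B)))
```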
In the students' defense, deMorgan's is the kind of thing that only my brother would understand in a lecture. I mean, until you do a few examples by yourself... ideally under pressure of "I don't have enough logic gates!"... these laws seem stupid and pointless.
I was chatting with the instructor after the lab, and he said that the double-negation (i.e. X = not not X; feel free to add not-nots whenever you want) is one of those mathematical tricks that are really annoying the first time you see them. Judging from the pleasure I received from pointing out this trick during the lab... and from my own twisted personality... I definitely agree with this!
On a more serious note, I was struck by how much (most of) the students enjoyed the lab. They were really proud of building a half-adder (that's something which adds two single-bit binary inputs, without even a "carry" input). Many groups were reluctant to move on to the next question (building a full adder) because they didn't want to dismantle their half-adder — even though the lab sheet told them to use their half-adder in the full adder!
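The circuit the sheet asked for can be sketched in software: the standard construction chains two half-adders and combines their carries with a single OR gate. A Python stand-in for the breadboard (assuming that standard construction is what the sheet had in mind):

```python
# A half adder: sum = A xor B, carry = A and B.
def half_adder(a: bool, b: bool) -> tuple:
    return (a != b, a and b)          # (sum, carry_out)

# A full adder from two half adders plus one OR gate.
def full_adder(a: bool, b: bool, carry_in: bool) -> tuple:
    s1, c1 = half_adder(a, b)         # add the two input bits
    s2, c2 = half_adder(s1, carry_in) # fold in the incoming carry
    return (s2, c1 or c2)             # (sum, carry_out)

# Sanity check against ordinary integer addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(bool(a), bool(b), bool(cin))
            assert int(s) + 2 * int(cout) == a + b + cin
```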
I even wondered how my life would have gone differently if I had signed up for the first-year computer architecture lab at SFU... I avoided doing any labs, because they seemed like a lot of work and I was insanely lazy in those days (hey, I was doing Philosophy!). But if I'd taken a lab or two, would I have gotten "bitten" by the "building things" bug? Would I have switched over to doing electrical engineering as my first degree?
This isn't really a regret -- my life progressed the way it did. If I were into regrets over my academic career, I'd be bitterly cursing the SFU computer science regulations at the time and the dot-com boom. Due to the dot-com boom, there were tons of CS students, so upper-level courses were restricted to declared majors. But due to the department regulations, you could only become a declared major if you took PHIL 001 "critical thinking". Since I was doing a philosophy honors degree and that course was so easy it didn't count towards a major (let alone an honors!), I refused to take the course, and as a result I ended up doing discrete mathematics.
Now, I don't dislike discrete mathematics. But I feel much more at home approaching it from a CS perspective. I like to be motivated by the thought of writing a program to create sudoku squares or discover music phrases; studying combinatorics or cliques in graph theory just for the sake of aesthetics doesn't get me particularly enthused.
I guess I have been bitten by the "build things" bug.