mrbadger wrote:Observe wrote:kohlrak wrote:I think he means he hated actually programming professionally (given his academic nature, it seems logical), so he likes to focus teaching power users at best, rather than programmers.
Yes, I understand that. I have the utmost respect for mrbadger and meant no offense whatsoever. I was merely trying to make the point that perhaps part of the problem with new graduates is that they have been instructed by those who do well in academia, but who may not be best placed to train people for the workplace.
The sort of jobs I was offered were always ones that would pay a lot but take up all of my time and be no fun as far as I could tell.
Microsoft wanted me on their Software Quality Assurance team, based in Ireland. That would have meant becoming an 'expert' on all of their major code-bases and being part of the team that made sure everyone was keeping to standards.
Lots of money, but no free time.
Or there was Google, where I would have worked on Google Earth. However, to do that I needed to move to Switzerland, without my son. Again, the money was good, but not enough of a reason.
Or Toshiba. I was never quite sure what Toshiba were offering, except that the pay was huge and it was a research post with, as far as I could tell, even less free time.
Moving to the US to work for Microsoft was offered as well, but that would also have meant leaving my son, so I never even considered that.
On the other hand, I have always wanted to be an academic. Even being a nurse was just a step towards that. An odd step, I admit, but a step still.
The pay is a fraction of what I could earn, but I love to teach. I always have.
I have always been of the opinion that it's more important to enjoy your job than to do it just for the money.
This is also a thing I try to explain to my students. Personal happiness and work satisfaction are vastly more important than any wage slip.
In my case that's why I was so eager to return to work after my brain injury, in spite of how difficult that was.
See, therein lies another issue. The pay matters, but that doesn't mean you should be blind to other factors just because the pay is high enough. You want to have enough to be comfortable; then ask yourself if you'll need more in the future. If not, why take the extra money? It's also a much higher risk. Are you going to take all that money into the afterlife? If you want to give it to charity, fine, but you have to ask yourself certain questions at the end of the day. The polar opposite of this is when you have to be like me and take jobs you're not satisfied with, simply because the jobs that would please you don't pay well enough for you to even cover the bills, let alone live comfortably.
Morkonan wrote:kohlrak wrote:... I would argue that economics is actually more useful than philosophy in this matter, since philosophy isn't overly useful in making programs outside of the logic bit...
All of your reply is noteworthy, I just want to avoid my penchant for bloviating into the available voids.
"To instruct a machine on how to accomplish a task." - Is that it? That's "programming" in a nutshell, right? During that, one may also instruct the machine to accomplish related tasks and how to accomplish them. Along the way, there will also be several "do this" commands, with the mechanisms for doing that already present in the machine's design.
Very simplified assessment. For someone who doesn't do it, that's good enough. However, in the thick of things, it's way too simple, and I feel too many people keep that mentality when approaching it even once they're part of it. It's like being a competitive weightlifter with the attitude "I pick things up and set them back down." No, you have to challenge yourself, and deeper in you find that sometimes you want to go lighter than you need to, sometimes heavier, and there's a massive strategy behind it.
Then, stuff changes, and new operators are added to the tools (languages) used to "do programming." Occasionally, something radical causes a... "paradigm shift" and new low-level opportunities are made available which flow up the chain to "programming." That's a one-way influence, isn't it? Or, does innovation in programming language, presumably in more efficient operators, influence hardware as well? I can see hardware being optimized, of course, for language, but how "innovative" is that? Enough for a shared dynamic, each influencing the other?
Let me explain the issue you're describing with some programming concepts to give you a realistic perspective, rather than a purely theoretical one. See, we have this concept called "abstraction." We do this with everything.
You can "abstract down" (which is how we normally learn about the objects in our life), where you take something, say, a basket ball, and say "that's a ball." Then you find out that it bounces, that it has a texture to allow friction to help you spin the ball (either when shooting or on your finger), that it's made of rubber, that it has those grooves on it which can affect it's flight path, and so forth. You have a similar thing with baseballs, except they have different properties. Then you have beach balls, volley balls, tennis balls, marbles, and so forth. All balls can roll, so if you need to roll something, you can grab any one of those balls. That's an abstraction, you can use any ball to fulfill the task of rolling. If you need something to relieve muscle knots, you need a ball or roller that can roll, but is hard enough to put pressure on a small point. A tennis ball may or may not work for this (usually does), but you can definitely rely on a baseball or basket ball to do this. These are abstractions: you're taking something complex, but simplifying it. For most objects, we abstract down and slowly discover how it's different from other objects that we give the same name to. The less complex name is the abstraction. We even do this with people. "Man" or "woman" often comes first, followed by "white, black, indian, asian, etc." Rather than having to store the entirety of the person in your brain, you can recognize properties of them that separate them from other humans, and these properties are often rather simple, and thus aren't as much of a problem to remember, which is why we can remember if someone's skinny or fat, but we can't remember what their face looks like.
You can also "abstract up," like with legos or building blocks, where, if you wanted to build a Pirate Bayamon, you would build a gun, out of those blocks, then you build 3 more like it. You build wings, and you build a skull. You build a body. Each of those then get mixed together to make the bigger bayamon. We make qualities, then combine the qualities. This is less common, but it is precisely what we do with programming, except it doesn't need nearly as much planning as building blocks, because, the first step of buildng the gun for the bayamon, you could potentially use the gun before you put it on the wings.
Now, the problem you're describing comes in when, back to the ball example: while any ball can roll, some balls are better suited to the task. Heavier balls roll further. Basketballs take up a lot of space. This maps onto the concept of "optimization" in programming. What's easiest for the programmer is just getting it to work, but that comes at the cost of a bloated mess. If it's optimized instead, it costs the programmer and/or company a lot, and thus as a customer it will cost you more, but at least when it's running, it runs much, much better. So, in order to get the best of both worlds, sometimes the basketball gets redesigned: you were relying on a regulation basketball, but you end up with a "little kiddy basketball" that's half the size, and it doesn't quite work anymore, because you were relying on the mass to help your shots go in, and the smaller ball just doesn't have enough mass for inertia to carry it as far, so you start missing shots. This is called deprecation. Or, to switch back to building blocks: imagine they started replacing all the yellow blocks with 4 bumps, so that if you want yellow blocks, you need ones with 6 bumps, and if you want blocks with 4 bumps, you have to use blue ones instead. As far as theory is concerned, you should be able to rely on your abstractions, but in practice this isn't the case, as some libraries become "deprecated" or "unsupported" (not the same thing, but not mutually exclusive terms either). Maybe what you wanted all along was a yellow block with 4 bumps, you've been doing it the other way for some time, and now you can finally do what you wanted in the first place. Those times are awesome, but they don't happen all that often.
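One common way deprecation shows up in practice is that the old name keeps working for a while but warns you to move on. A minimal sketch using Python's standard `warnings` module; the `bounce`/`bounce_v2` names are hypothetical:

```python
import functools
import warnings

def deprecated(replacement):
    """Decorator that flags a function as deprecated, pointing at its successor."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{fn.__name__} is deprecated; use {replacement} instead",
                DeprecationWarning, stacklevel=2)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def bounce_v2(height):
    # the redesigned "kiddy basketball": the new API everyone should use
    return height * 0.75

@deprecated("bounce_v2")
def bounce(height):
    # the old API still works, but every call nags the caller to migrate
    return bounce_v2(height)
```

Callers relying on the old abstraction keep working during the transition, which is exactly the window in which "deprecated" and "unsupported" diverge: deprecated code still runs.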
So, the problem is that hardware changes (the building blocks), drivers change (the communication method between the button that fires the gun and the gun itself), APIs change (the button that fires the gun), and programs change (where the button is placed in the cockpit). And, yes, it's even more complicated than that. Some changes deliver their benefits without changing any of the levels "above" them, and those changes are awesome. Unfortunately, in reality, a change usually requires further changes, or an extra "abstraction layer" (aka a "wrapper") just to make things work like they did before, and sometimes without any of the benefits, so the extra wrapper complication is more code, more work, and more room for bugs. Oftentimes what we think are improvements actually aren't. Sometimes, though, that's necessary, because the gun design doesn't exist solely to be fired from your Bayamon, but from other Bayamons, and from ships that are not Bayamons.
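A compatibility wrapper of that kind can be sketched in a few lines. Assume (hypothetically) a driver call changed its signature and units; `fire_gun_v2` stands in for the new API, and the wrapper keeps old callers alive:

```python
def fire_gun_v2(ship_id, *, power_watts):
    # the new API: keyword-only argument, different unit (watts)
    return f"{ship_id} fired at {power_watts}W"

def fire_gun(ship_id, power_kw):
    # the wrapper: the old signature (kilowatts, positional) kept working
    # by translating into the new call. It's more code, one more place
    # for bugs, and delivers none of the new API's benefits to old callers.
    return fire_gun_v2(ship_id, power_watts=power_kw * 1000)
```

Every caller written against the old button keeps compiling, at the price of one more layer in the stack.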
Programming languages are just one layer. The good news is, the higher layers don't directly affect the lower layers.
Babbage moved automatons and windmills from simple, fairly single-purpose tasks to a multitask capability. A machine that, within physical constraints, would do what you told it to do. That's more than the remarkable checkers-playing clockworks or variable-speed transmissions did.
Philosophy, from what little I know of it, will often use a rigorous set of principles in order to explore a concept or present an argument. The "rigor" presented in such things is what is important. If one strays, all is lost, since it naturally devolves into babble. Unless one creates a suitable foundation, already proven in its stability, the house collapses. Well, unless one invents a new, equally firm, foundation from which to make an argument or explore a concept.
Create a thing from a set of rules.
Tinkertoys. Legos. Building blocks. I don't mean just "objects" but constraints. You can't tell a machine to do something it isn't designed to do. You can force it to accomplish a task it wasn't designed for, but you can't force the "how" by which it accomplishes that task if there is nothing in the machine that can support that "how." I can't tell a television to put a balloon on a string. But, I can certainly instruct a television on how to display a balloon on a string. I have hardware constraints that must be obeyed.
At the lowest level, it's all electrons jumping around. One day, it may be photons. Until then, innovation at the basic level revolves around more betterer ways to get the electrons to "do." Sometimes, there are new ways to get the electrons to "do." There could even be ways to organize that "do" that are innovative and new. Parallel vs. serial, etc.
It's all about rigorous rulesets at varying levels. Sometimes, people write new rulesets and tell the machine how to interpret those. The new rulesets make it easier, more betterer, to do certain things using that set of rules.
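The "write a new ruleset and tell the machine how to interpret it" idea fits in a few lines of code. This three-operation stack language and its `run` function are invented here purely for illustration, in Python for brevity:

```python
def run(program):
    """Interpret a tiny, made-up ruleset: a stack language with
    three operations (push, add, mul)."""
    stack = []
    for instr in program:
        op, args = instr[0], instr[1:]
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown operation: {op}")
    return stack[-1]

# (2 + 3) * 4, written in the new ruleset rather than the host language
program = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
```

The point is the layering: the ruleset is easier for expressing some things, and the interpreter is the "telling the machine how to interpret it" part.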
At the most basic level, considering this, perhaps teaching new computer programmers how to achieve things within a very narrowly defined set of rules is what should be done?
Assembly and BASIC are popular for this reason: they're lower-level tools with simpler parts, upon which the newer, higher-level languages are built. This means everything that exists can be translated into them, and familiarity with the simpler tools that are "harder to do things with" pays off: once you can explain mov, add, sub, jmp, cmp, jX, and how labels work, you should be able to explain pretty much any higher-level programming language, given that they all build on those fundamentals. People are afraid of this, though, because these fundamental concepts can be "hard to teach." Instead, a lot of teachers avoid the fundamental concepts, and avoid certain features of the "easier tools" because those features rely on the fundamentals they don't wish to explain. This is a huge problem in foreign language learning as well (especially Japanese and English). The "if they need it, they'll figure it out on their own" mentality might be true, but you are wastefully making them learn the hard way what they inevitably need. Maybe they don't need to learn assembly, but they will need to learn pointers, and pointers are way easier to teach in assembly. I understand that you can't do assembly without pointers, but students will always need pointers eventually anyway. You're just putting pointers off by avoiding the topic, and they'll learn them one way or another, even if it's the hard way, and without a complete grasp of the topic (thus leading to incorrect assumptions that then become bugs), because you were lazy.
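Since this thread isn't tied to one language, here's a sketch of the pointer idea using a toy byte-array "memory" in Python: the point being made above is that a pointer is just a number (an address), and dereferencing is just indexing into memory. The `store`/`load` names are made up; the comments map them onto the assembly mnemonics mentioned earlier:

```python
# a toy 16-byte memory, standing in for real RAM
memory = bytearray(16)

def store(addr, value):
    # roughly: mov [addr], value  -- write through the pointer
    memory[addr] = value

def load(addr):
    # roughly: mov reg, [addr]   -- read through the pointer
    return memory[addr]

ptr = 4            # the pointer itself is only a number
store(ptr, 42)     # write *through* the pointer
value = load(ptr)  # read it back through the same address
ptr = ptr + 1      # "pointer arithmetic": step to the next cell
```

In assembly this is unavoidable and therefore obvious; in higher-level languages the same mechanism hides behind references and objects, which is why teaching it early pays off.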
Teach them how to win at chess. Teach them how to win at Go. Then, give them a variety of "games" and tell them to "win." Then... tell them to make their own game, with their own rules, and make it winnable.
Electrical engineers are modern-day wizards. They must learn practical physics. They have to know "how", but they also have to know the "why" that lies behind the "how." If they don't, they end up causing lots of problems, like buildings burning down and equipment exploding... It's important.
This happens with programs, too. Most of the time, if you have the right abstracted parts, you can get things right, or at least "good enough." My girlfriend's father is an electrician, and you should hear some of his horror stories. You'd be surprised how much incompetence exists in electrical engineering as well. You'll find, with enough research, that the levels of incompetence you see in your own field exist in pretty much every field. For better or worse, the results are, by pure luck I imagine, rarely the worst-case scenario. The curse of being a professional in something dangerous is knowing that your peers are incompetent, even after normalizing for the Dunning-Kruger effect.
How much does a programmer need to know about the "why?" At what point does the pursuit of that knowledge cause the student to begin deviating deeper into the engineering aspects rather than the "playing the game by the rules and winning?"
A "programmer" should, because of their ease with using rulesets to accomplish tasks, be able to pick up a new language and use it proficiently to accomplish tasks. IMO, they should be exposed to as many toolsets as possible. They should be instructed how, in general, these toolsets work and, to a certain extent, "why" they work. For instance, they should have a series of instruction concerning how languages, the tools, are constructed and how they work to move electrons around. (Assembly introduction and concepts relating to even lower-level functions as well as how interpretations of languages are created.)
But, I don't think that, for a programmer, they must "know" Assembly. What's important is their ability to acquire a new tool (rule-set/language) when it is needed or available and to use that tool to accomplish a desired task with an acceptable level of proficiency. Their practical, workable, knowledge may not need to extend beyond the level of the compiler. They know what it does and, to some extent, how and why it does it. But, they don't need an operational level knowledge below that in order to serve their purpose.
But, what if the level of proficiency, in order to be acceptable, must be operational-level? To that end, the latest people asking for my help get the following explanation from me. I start with electrons moving in a wire, then move to diodes (and how they work), then to transistors, then to a basic logic gate. From logic gates, we can build a simple adder (simple addition), a lamp display, and switches to display information in binary notation. The inputs are the switches; the output is the lamps. Once you understand an adder, you know that other circuits exist, and you are free to look them up if you wish, but you don't have to. As long as you know the adder, you understand enough to move up a level. We'll assume we can add another set of switches that selects the operation. Then we can use another set of switches to say which array of switches becomes the inputs. Next you get solid-state storage, which can hold the result of an output, so that you can use the output of a previous operation as the input of the next. At this point, you have a very obnoxious-to-use calculator. But from there, we can remove all the switches and fill the solid-state memory from switches and buttons. From there, we can easily build a handheld calculator, or start talking about RAM. Once you have RAM, you can start talking about ROM. Once you have ROM and RAM, we can run a program. Once we can run a program, we can start talking about adding other hardware. Once we have all that other hardware (speakers, screen, buttons), those things can have processors, RAM, and ROM of their own (sound card, video card, keyboard [and yes, keyboards do have their own microcontrollers]). Then you have a need for a driver as an abstraction layer, so programmers have a common interface across different machines and hardware.
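The gates-to-adder step in that progression can be sketched as pure functions. The gate set, the function names, and the 4-bit width are arbitrary choices of mine; the structure (gates, then a half adder, then a ripple-carry chain) is the standard textbook construction:

```python
# gates first: the only primitives everything else is built from
def AND(a, b): return a & b
def XOR(a, b): return a ^ b
def OR(a, b):  return a | b

def half_adder(a, b):
    # adds two bits: returns (sum bit, carry bit)
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    # two half adders plus an OR gate: returns (sum bit, carry out)
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

def add4(x, y):
    # four full adders chained: the carry "ripples" from bit to bit,
    # like wiring the lamps and switches described above
    total, carry = 0, 0
    for bit in range(4):
        s, carry = full_adder((x >> bit) & 1, (y >> bit) & 1, carry)
        total |= s << bit
    return total, carry   # carry is the "overflow lamp"
```

Once the adder makes sense, the claim in the text follows: every other circuit is the same game with different wiring, so you're free to look them up on your own.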
Then, as everyone realizes everyone has their own ideas, techniques, etc., we find that software is forever expanding and adding more abstraction layers (the OS provides common APIs, which use common drivers, which use common hardware, etc.). If you want to know any more, you have the basics necessary to follow up with your own research. Whereas, without those basics, you can't even ask "how does an operating system work?", because the answer might not be at the level you need it to be for you to actually understand it (either too simplified to answer the actual question that got you looking into the topic, or too complex for you to wrap your head around).
An engineer, though, would need that level of knowledge. They build the stuffs that make the "dos."
Everyone does. Trust me. Imagine someone with only website-building knowledge being tasked to write drivers for a new military laser for shooting down incoming missiles, simply because he's certified as "the computer guy." Obviously, the militaries around the world are more competent than that, but are they competent enough? This scares me. Would they settle for someone who has experience writing CD-ROM drivers?
mrbadger wrote:I'm not quite sure what's going on with pure Computer Science.
I can only speak definitively for my university, but the subject doesn't really seem to have realised yet that it's not supposed to be preparing people for academia.
That was true when I was an undergrad in the early 2000s, and it seems to still be true. They won't let go of this 'we are training academics' mindset, even if they say out loud that they have.
...
Experimental vs practical. Engineers vs architects.
Every time someone comes up with a good, new, tool, the user environment erupts. An MRI revolutionizes medicine, a new plow revolutionizes farming.
Are the students tool builders or tool users? What is it that is being produced versus what it is that one desires to produce?
This is the kicker with computers, and why it's especially difficult. We see it a little with Arduino, but, in reality, we both make and use the tools. This topic is often hard for me in terms of game dev. I really want to make a game that's moddable at the core. Thus, I'm a tool maker, not a tool-using game maker.
I had ideas for awesome sandbox games before Minecraft ever came out. I had this really cool idea for an Elder Scrolls kind of thing in space, where users could build scripts within a specific API and just script in new races, new ships, new campaigns, etc., like everyone's doing right now. But, even cooler, was the idea that the AI would actually try to live in this universe despite the changes, and actually try to adapt to the mods. Those plans have since evolved quite a bit, and I've replaced them all with the idea of a virtual machine where programs can call other programs as abstractions, but can then override functions in higher-level programs (much like classes in Java and C++), and can do all this without having to recompile the entire code base. On top of that, I would make a game with a world where there are only 3 languages spoken, none of which actually exist in the real world. The languages would be based on what would make sense for existence in the game world, and thus the AI could legitimately communicate with each other (instead of sharing common knowledge through global variables). As a player, your goal would be to integrate yourself into this world and learn the language of the starting village (where you would have "parents" who would teach you the language like real parents do). To get out of the village and experience the real game, you would have to show proficiency in casting magic, talking to NPCs, etc. This would be tiered, rather than a single test, so one simple test would be to have you learn how to ask for food and water, and slowly you could ask for more materials. And the AI wouldn't just listen to your commands; you'd have to earn your rank, and you could lose rank by being demanding and never contributing to the projects. The best part? The project is not as ambitious as it sounds. The hardest part would be building the compiler/assembler for the virtual machine.
However, a bigger problem is that I know it would need a really patient starting community who could get into the gameplay for it to take off. Therefore, I know the project would fail. Granted, none of the ideas are actually new; they have existed in several games before (NetHack, Adventure Bar Story [3DS], Barony, etc. have had artificial languages that are randomly generated on each new playthrough, for example; Fantasy Life has shown there's an interest in playing various different roles in a world; Cave Story and other retro games showed that you don't need a complex 3D world to make people happy).
Students want a degree and knowledge that they can use to pursue their desired goal. That's true in every institution of learning. If the student wants to create new tools, they need an engineering focus. If they want to learn how to use tools and how to learn to use new ones, they need a practical focus. Both? Well, then they can get a dual degree in both.
In reality, it really is both. Especially those familiar with the CLI will quickly find themselves making tools, even if they didn't get into it expecting that. Programming really is all about making tools. Some tool making is just more like tool making than other tool making.
Computer "Science" to me indicates something more tool-creation based than tool-use based inasmuch as the desired goal of such things are different. It helps if a race-car driver knows many of the details of how their automobile works, but that doesn't help them achieve the goal of winning automobile races. They have specialized mechanics to ensure they have the best tool possible for that.
You'd be surprised how much a race-car driver knows about their automobile. I personally know a few small-fry professionals. Mechanics seem to be more-or-less sidekicks or hired hands rather than a completely separate part of the same team. Just think of how well you know the various ships in X. Now, if it were realistic, every ship of the same race and class would have little quirks and be slightly different. Ever notice this about the cars you've driven? Even between cars of the exact same year and model, maybe one steering wheel has a larger "deadzone," or one accelerates faster than the other, even though you feel those differences shouldn't exist. As such, a race-car driver is very familiar with his cars, and many are often under the hood with their mechanics: they want to make sure things are tweaked to their personal preferences. You see this less with, say, an air force, but you still see it. You wouldn't believe how much a pilot learns about his airplane. Falcon 4 BMS has given me an ever-so-small taste of that reality. The basic rule is: the more competitive the market, the more you have to know to give yourself an edge.
Morkonan wrote:mrbadger wrote:My argument on that has been that we have post graduate courses for specialists.
Undergraduate courses are for generalists, and should be aimed at things people need for the workplace and their careers.
A good solution, probably. Nobody wants a half-trained tool maker.
PS - Geometry!
Geometry is a fascinating subject. I love proofs! Proofs, everywhere! This is this, and this is this... so this must be this, always. Oh, but only if it's just two dimensions...
If it's three, you'll have to sign up for the advanced courses.
Honestly, I'd much prefer a grading system that wasn't based on "pass or fail" but on "this is your level of competence." In other words, if you can't handle making functions, you're on the web-design team, because you passed the class with a 10%.