Owen Densmore wrote:
...
> Really hip programming teams will define a subset of all these systems
> that are platform independent -- i.e. work on all systems. They will
> stick to these subsets, understanding that sometimes constraints
> really are freedoms.

I have a colleague who insists that the only such subset is C,
straight up, no ++, no #, no Objective, no C-like scripting language.
She insists that if you stick with ANSI C (C89) you will have code that
is highly portable - as long as you stay out of the hardware. She
considers C90 just as portable, but is suspicious of C99 as it is
still catching on in some places.

--
Ray Parks                    [hidden email]
Consilient Heuristician      Voice: 505-844-4024
ATA Department               Mobile: 505-238-9359
http://www.sandia.gov/scada  Fax: 505-844-9641
http://www.sandia.gov/idart  Pager: 800-690-5288

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
Parks, Raymond wrote:
> Owen Densmore wrote:
> ...
>> Really hip programming teams will define a subset of all these systems
>> that are platform independent -- i.e. work on all systems. They will
>> stick to these subsets, understanding that sometimes constraints
>> really are freedoms.
>
> I have a colleague who insists that the only such subset is C,
> straight up, no ++, no #, no Objective, no C-like scripting language.
> She insists that if you stick with ANSI C (C89) you will have code that
> is highly portable - as long as you stay out of the hardware. She
> considers C90 is just as portable, but is suspicious of C99 as it is
> still catching on in some places.

Perhaps it is worth at least entertaining the idea that some platforms
are _better_ than others? For example, the compilers that implement at
least 10-year-old standards instead of 20? No, it's not freedom to be
constrained to 20-year-old programming language standards.
Marcus G. Daniels wrote:
> Parks, Raymond wrote:
>> Owen Densmore wrote:
>> ...
>>> Really hip programming teams will define a subset of all these systems
>>> that are platform independent -- i.e. work on all systems. They will
>>> stick to these subsets, understanding that sometimes constraints
>>> really are freedoms.
>>
>> I have a colleague who insists that the only such subset is C,
>> straight up, no ++, no #, no Objective, no C-like scripting language.
>> She insists that if you stick with ANSI C (C89) you will have code that
>> is highly portable - as long as you stay out of the hardware. She
>> considers C90 is just as portable, but is suspicious of C99 as it is
>> still catching on in some places.
>
> Perhaps it is worth at least entertaining the idea that some platforms
> are _better_ than others?
> For example, the compilers that implement at least 10 year old standards
> instead of 20? No, it's not freedom to be constrained to 20 year old
> programming language standards.

Hey, like I said, it's not my idea but a colleague's. I ran up against
it when I suggested writing something in C++ rather than ANSI C.
However, constraints sometimes free one by eliminating the difficulty
of choice. One can move on without worry that the choice is wrong.

--
Ray Parks                    [hidden email]
Consilient Heuristician      Voice: 505-844-4024
ATA Department               Mobile: 505-238-9359
http://www.sandia.gov/scada  Fax: 505-844-9641
http://www.sandia.gov/idart  Pager: 800-690-5288
Given that a Steve Yegge blog post started this discussion, you might be interested in another (earlier) post of his....
http://steve-yegge.blogspot.com/2007/02/next-big-language.html

and something more for the Javascript fan-boys:

http://javascript.crockford.com/javascript.html

BTW, having worked with a number of mainstream languages - FORTRAN, C,
C++, Java, Javascript, XML, XSLT, and lately Ruby - I'd have to say I'm
having a "lot of fun" working with Ruby right now. However, I'm not
building web sites and I haven't used Python (shame), so I'm missing
the point on where Ruby "fails". Perhaps also I've been doing too much
BPEL lately, and every language looks good from there.

One annoying thing I'm finding with loosely typed Ruby is that the
compile/debug/compile cycle has been replaced by
unit-test/debug/unit-test. The thing that saves me, though, is that
Runit and Rake make unit testing very easy. This really irritated me
with Javascript when I was using it heavily a few years ago, because it
lacked a good test environment. Perhaps that has changed.

(And yes, I mention XML and XSLT as programming languages because that
is the way so many "enterprise systems" treat them - look at BPEL for
example... but then maybe you shouldn't.)

Regards,

Saul
In reply to this post by glen e. p. ropella-2
On Feb 16, 2009, at 12:12 PM, glen e. p. ropella wrote:
> Thus spake Prof David West circa 14/02/09 01:24 PM:
>> Language selection reasons like, "it is too hard to learn," "memory
>> leaks," "it runs faster," "Java developers are cheaper because there are
>> more of them," etc., are really dumb reasons for choosing a language.
>> Instead you should focus on your application domain, your reason for
>> creating the software in the first place, your working style, and how
>> well you truly understand the problem, and any potential solution to
>> that problem, you are trying to address.
>
> This paragraph seems a little self-contradictory. When I write a
> program for a client and that client's requirements include taking over
> and developing the code themselves, then choosing Java because "Java
> developers are cheaper because there are more of them" is not only NOT
> dumb, it reflects a "focus on your application domain, your reason for
> creating the software in the first place, ...". The same goes with the
> "it is too hard to learn" justification. Often, the system requirements
> flow down to those justifications. And they're not dumb if they follow
> directly from the requirements.

At my previous job at a non-profit I could have written some systems
administration scripts/programs in Ruby or Python. I chose to write
everything in BASH (except one or two things I had to do in PERL)
because I felt it would be irresponsible of me to leave the job to a
recent college graduate (as occurred) who might not have had any
experience with Ruby (for example).

I guess my point here is that, if I understand what he is saying, I
agree with Owen about the problem of the silo of IT vs. what I consider
"real programmers" to be doing. IMHO, the needs, considerations, and
motivations of a sysadmin writing maintenance/monitoring/reporting
scripts vs. someone hired to write a custom webapp for an organization
are quite different.
I said I'd never dare call myself a programmer because of my skill
level, and because I don't get paid to write software. I do mostly
systems administration, in the context of which I write programs as a
consequence thereof. While it could be argued that sysadmins get paid
to write software too, I think many of us are in a different league,
many echelons below the skill/experience level of those of you who
write functional applications of more complexity, with more features
and more lines of code, in a variety of languages. Also, I consider the
"maintenance software" more OS-level work, and webapps or desktop
applications more application-layer programming.

Returning to Owen's point, I wish to educate myself away from the silo
mentality and approach, as I hope doing so will improve my skills and
work and potentially open the door to more versatility for myself and
the people I'm serving in doing my job (supporting bioinformatics
research).

To address the same point obliquely from another angle, my decisions on
script language are as much based on organizational and personnel needs
and sticking to longstanding and well-understood standards as anything
else, with particular emphasis on who would inherit the maintenance of
the software, however simple that software may be (RAID hardware
monitoring and reporting scripts). The last script I wrote in PERL ran
2x faster when re-written in Python, but at 500 ms execution time vs. 1
second, who cares other than me, and does it even matter?

At my first real Unix job 13 years ago I was promised training I never
received, and am grateful that the mission-critical systems I was
examining in my learning process were running standard software and
relatively uniform shell scripts. I'm not sure how well I'd do if I
inherited the same situation today and 20% of the administration
software was written each in a different language (C, BASH, PERL, and
Ruby... however similar they might be in some respects).
But, from a learning standpoint, both my successors and I probably
would have learned more if I'd left the same script behind in two to
three languages (BASH, PERL, and Python), though in reality, since time
is my most precious commodity and at a premium, it's unlikely I would
have been able to do that. As skill increases, however, this becomes
possible, and is an idea that intrigues me.

When I was young and naive I complained about the way my predecessors
did things when I inherited scripts, Unix systems, and the like. Today
my attitude is more that "if I don't like it, I should change it, and
if I lack the skill to do so I have no right to complain". From another
point of view, maybe in my field the language of maintenance scripts is
irrelevant, and it's the job of whoever inherits the job to deal with
what they find, or learn, or find another vocation.

I don't have a hard and fast answer to this question of "what's
appropriate to leave behind?" but I know I have tried to stick to
basics and portable maintenance programs (BASH) whenever I can, keeping
in mind the person who might inherit my work (instead of leaving a C
binary and throwing away the source code).

Lastly, I'm very grateful for this discussion, as it's both very
interesting to me and I feel I'm learning from it.

Thank you,

-Nick

----------------------------------------
Nicholas S. Frost
7 Avenida Vista Grande #325
Santa Fe, NM 87508
[hidden email]
----------------------------------------
Thus spake Nick Frost circa 16/02/09 03:01 PM:
> because I felt it would be irresponsible of me to leave the job to a
> recent college graduate (as occurred) who might not have had any
> experience with Ruby (for example).
>
> [...]
> IMHO, the needs, considerations, and
> motivations of a sysadmin writing maintenance/monitoring/reporting
> scripts vs. someone hired to write a custom webapp for an organization
> are quite different. I said I'd never dare call myself a programmer
> [...]
> I think many of of [us?] are in a different league [...] those of you
> [...]
> Returning to Owen's point, I wish to educate myself away from the silo
> mentality and approach as I hope doing so will improve my skills and
> work and potentially open the door to more versatility for myself and
> the people I'm serving in doing my job (supporting bioinformatics
> research).

What you are talking about (re: avoiding irresponsibility and
separation of sysadmin vs. application development) falls squarely
under requirements determination and flowdown. It's true that sysadmins
are different from app developers. And it's true that sysadmin work is
different from programming. But those are fine distinctions within the
work of building computer-based solutions that solve some problem. The
larger umbrella you're looking for is systems engineering.

The "silos" are there because everyone is self-centered and, for good
reason, thinks more about their field of expertise than they do about
others'. This is not all bad or all good. It's specialization and, as
humans, it helps keep us at the top of the food chain.

Everything you describe doing sounds like exactly the right way to
think about and do it. The next trick is to transition from informal,
intuitive requirements extraction, analysis, and satisfaction to more
formal, repeatable, and communicable processes. Just remember to be
skeptical when anyone tries to tell you that such engineering can be
linear or acyclic.
A strong indicator for linear thinking is the degree of conviction to
some assertion. [grin] E.g. if someone tells you that language ABC is
the best language for XYZ, then they're most likely a (naive) linear
thinker and prize abstraction over concreteness.

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com
In reply to this post by Nick Frost
Thus spake glen e. p. ropella circa 16/02/09 16:02 PM:
> The next trick is to transition ... to more
> formal, repeatable, and communicable processes.

There are no such things.

Formal only applies in the small number of cases where the domain you
are trying to understand, and in which your software is to be deployed,
is itself a formal system; e.g. compilers, OS kernels and most
utilities, and the kind of programming done by systems programmers
focused on the machine itself.

Repeatable is a myth - a religion, actually - pushed by the Carnegie
Mellon Software Engineering Institute with their Capability Maturity
Model and their fellow travelers. Repeatable applies only to production
processes, and software development is not a production process. (It
involves, in a fairly trivial way, a production process.)

Communicable is a totally local phenomenon - that is to say that a team
of people working on a project can establish communication among
themselves and can establish a group consensus of "what it all means,"
but that knowledge is NOT transmittable to others not a part of the
group. In fact, if the group holding the knowledge disbands and
reconvenes after some period of time (as short as six months), it is
not communicable even to themselves.

Forty years of empirical evidence to the contrary, the software
development field still insists that engineering is a good metaphor for
what most software developers do. It is a terrible metaphor.

> Just remember to be
> skeptical

Absolutely!

dave west
In reply to this post by Nick Frost
Thus spake glen e. p. ropella circa 16/02/09 11:12 AM:
> When I write a
> program for a client and that client's requirements include taking over
> and developing the code themselves, then choosing Java because "Java
> developers are cheaper because there are more of them" is not only NOT
> dumb, it reflects a "focus on your application domain, your reason for
> creating the software in the first place, ...".

It IS dumb. Not for you as the developer, but on the part of the client
for making it a "requirement." All the different ways it is dumb are
too numerous to go into here - but many of them derive from the fallacy
that developers are a commodity.

> Likewise, if I write a program that is intended to run on a
> microprocessor and will hook up to devices that require high I/O rates,
> then "it runs faster" is an excellent reason for choosing a particular
> language.

"Smalltalk is too big and too slow to use for telephone switching
systems with millisecond time budgets." "C runs faster than anything
except assembler." We are developing an embedded telephony switch,
therefore we absolutely positively have to use C. Dumb!! I have seen
Smalltalk applications for telephone switches that outperformed C code
doing the same job, with the added advantage that they took 6 months to
develop in Smalltalk and 18 months using C. Moreover, the C team were
all senior programmers with lots of experience writing C, and the
Smalltalk team had less than two years' experience with that language.

The dumbness arises from a failure to take into account all the factors
- most notably design - that could address and resolve the performance
requirements.

Years ago, companies were demanding that all of their applications be
written in C++, because of speed and because of "features" like
multiple inheritance and friend declarations that "improved the
efficiency of your code." In the process they incurred millions of
dollars of technical debt for no real reason.
Stroustrup himself admitted in a keynote speech that in his career he
had encountered exactly one instance where he positively absolutely
could not have satisfied performance requirements without using
multiple inheritance (and another for a friend declaration), which
means that 99.9% of the software written in companies that mandated C++
could have been written in other languages at less cost. That is absurd
waste.

> So what I hear you saying is choosing a language because of its _effect_
> is dumb, instead you should choose a language because of the _cause_ of
> that effect.

No, I am saying that you should seek an isomorphism between the
language you choose and what it is that you want to express. All
programming languages were created with a purpose, based on a
philosophy and a set of values. This is very evident when you read the
ACM history of programming languages books. If the intrinsic
'philosophy' of a language is at odds with what you want to say, you
will be forced to use convoluted and complex expressions to say what
you want, while in another language your expressions would be elegantly
simple. I am not talking about grammar here, but design. The increasing
interest in domain-specific languages is based, in part, on this idea.

> But we've seen that
> natural systems don't succumb as easily to solutions built with
> linearized methods, which is why "agile methods" are so popular these
> days.

So if you want to find elegant solutions for problems arising in
natural systems, you should choose a language that was explicitly
designed to support exploratory, iterative, incremental, adaptive, and
evolutionary development; that is not strongly typed (because the real
world is not); that hides implementation details (like memory
management); and supports the use of domain vernacular in its grammar,
etc. etc. Sounds like Smalltalk.
[grin]

davew
In reply to this post by Nick Frost
They had the seed of one. Self did not have the class library and range
of functionality of Smalltalk and, more importantly for Sun, it had no
user base; at the time, Smalltalk was being touted as the "next COBOL"
because of the extent to which it was being used in industry. There
was, from what I have been told, an internal discussion about using
Self as the foundation for Java instead of Oak - but the Web, which at
the time required apps with very small footprints, became the dominant
decision factor. The embedded, portable VM characteristics of Oak won
out.

My memory may be wrong here, but I don't think Self had a VM while
Smalltalk did. In fact, the Smalltalk VM could read Java bytecode,
allowing you to create hybrid apps that intermixed both languages.

davew
In reply to this post by Prof David West
Thus spake Prof David West circa 16/02/09 05:47 PM:
> Thus spake glen e. p. ropella circa 16/02/09 16:02 PM
>
>> The next trick is to transition ... to more
>> formal, repeatable, and communicable processes.
>
> There are no such things.

Yes, there are. But you may not be hearing those words the way I'm
saying them. ;-)

By "more formal", I don't mean absolutely and purely syntactic with no
semantics ala Hilbert. I mean formal as in "relating to or involving
outward form or structure." That applies to _all_ cases everywhere,
even if the form or structure is implicit.

By "repeatable", I don't mean "the ability to do exactly the same thing
multiple times and multiple places." I mean "the ability to perform
similar tasks with similar results." That's not a myth. Repeatable
processes exist and I use them on a regular basis. Here's an example:
1) write code, 2.1) compile code, 2.2) run executable, 2.3) (finished ?
goto 3 : goto 2.1), and 3) analyze results. Not only is there a repeat
within that example; but the process occurs throughout my work.

And communication _does_ happen, however noisy it may be. People do
learn, say, a build process when another person explains it to them
and/or shows it to them. Such processes can be written down and
communicated across a wide variety of people for as long as the system
works mostly as it did when the document was created.

So, more formal, repeatable, and communicable methods do exist. Just as
_less_ formal, repeatable, and communicable methods exist.

> Forty years of empirical evidence to the contrary - the software
> development field still insists that engineering is a good metaphor for
> what most software developers do. It is a terrible metaphor.

You're throwing the baby out with the bathwater. It's true that
software is _somewhat_ different from, say, bridge building. Or the
work a chip designer does to build a chip. But it's not _that_
different. This "software developers are from another planet" garbage
is a smoke-screen and an excuse for a prima donna attitude...
or worse, an extension of the teenagers' or artists' "I'm so
misunderstood" lament.

To engineer means "The application of scientific and mathematical
principles to practical ends such as the design, manufacture, and
operation of efficient and economical structures, machines, processes,
and systems." If programmers are NOT applying scientific and
mathematical principles to practical ends, then they are NOT
programmers... I don't know what they are; but they're not programmers.

Granted, I'm not saying that the CMM is right. I'm also not saying that
the RUP or the agile methods are right. In fact, I think they're all
dangerously wrong. But to claim that regularity and formalization (in
the sense of "more formal") are impossible and non-existent is
unjustified and irresponsible.

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com
In reply to this post by Prof David West
Thus spake Prof David West circa 16/02/09 05:51 PM:
> It IS dumb. Not for you as the developer, but on the part of the client
> for making it a "requirement."

The client doesn't _make_ it a requirement, as if requirements are
created willy-nilly by some air-headed marketing type (no offense
intended). The client is living within a context full of constraints,
some of which disallow the open-ended search for a more optimal
solution. The requirements come from the objectives of the project and
the constraints that obtain in the environment.

It is definitely not dumb. It's reality. It's not dumb to derive
requirements from the set of extant resources and constraints. And
sometimes the derived requirements point explicitly to a particular
language.

> "Smalltalk is too big and too slow to use for telephone switching
> systems with millisecond time budgets." "C runs faster than anything
> except assembler." We are developing an embedded telephony switch
> therefore we absolutely positively have to use C. Dumb!! I have seen
> Smalltalk applications for telephone switches that outperformed C code
> doing the same job with the added advantage that they took 6 months to
> develop in Smalltalk and 18 months using C. Moreover, the C team were
> all senior programmers with lots of experience writing C and the
> Smalltalk team had less than two years experience with that language.

Heh, all I can say is "so what?" Are you simply telling me anecdotes
about how people have made premature and unjustifiable claims in the
past? Well, ... duh! Obviously people make premature and unjustifiable
claims all the time. But that doesn't mean that _every_ time someone
chooses C++ over Smalltalk because of execution time, it automatically
means they made a dumb decision or that their justification is dumb.
Yes, if they jump to premature and unjustified conclusions, then it's
... well, premature and unjustified. But it may not be dumb!
Perhaps that's the only justification they could come up with, and if
they spent the next 50 years trying to find a good one, they'd be laid
off and another yahoo willing to "shoot from the hip" would have taken
her place and made millions of dollars on an IPO? [grin] (with which
she can then spend the next 50 years figuring out and yapping about
what the _best_ language would have been)

This goes back to my original point: requirements are often _complex_
and cannot be linearized or deconvolved without doing severe damage to
the understanding of the problem or the solution.

> The dumbness arises from a failure to take into account all the factors
> - most notably design - that could address and resolve the performance
> requirements.

But if, say, C++ is explicitly _derived_ from the requirements, then it
is a smart choice. Likewise, if Smalltalk is derived from the
requirements, then it is a smart choice, regardless of any linearized
conception you or any other outsider may have of the problem and
solution under consideration.

> Years ago, companies were demanding that all of their applications be
> written in C++, because of speed and because of "features" like multiple
> inheritance and friend declarations that "improved the efficiency of
> your code." In the process they incurred millions of dollars of
> technical debt for no real reason. Stroustrup himself admitted in a
> keynote speech that in his career he had encountered exactly one
> instance where he positively absolutely could not have satisfied
> performance requirements without using multiple inheritance (and another
> for a friend declaration) which means that 99.9% of the software written
> in companies that mandated C++ could have been written in other
> languages at less cost. That is absurd waste.

[grin] You act as if you _know_ all the facts and circumstances. That's
nice that you're so convinced.
I remain skeptical of _both_ corporate policies mandating languages
_and_ your accusation that these particular ones were an absurd waste.
It reminds me of many of the "business process reengineering" people
I've met. They often have _no_ idea of how efficient their current
business processes are. They just know that they're getting pressure
from their stockholders, board, and management to change _something_
... _anything_ ... because the performance of the company (as measured
through some myopic lens) isn't satisfactory.

You think such a corporate mandate is a waste. But do you really _know_
that? Can you _demonstrate_ it? (or at least provide some experimental
data to show that it was a waste ... "experimental" in the same sense
as scientific data, by the way ... not some long-winded prophet who
writes books for a living)

> No, I am saying that you should seek an isomorphism between the language
> you choose and what it is that you want to express.

Again, you're linearizing the requirements process. Language choice is
a result of all the _many_ and intertwined requirements for any given
project, including the time you have, the money you can pay, the people
you have available, _and_ what you want to express with the language.

> So if you want to find elegant solutions for problems arising in natural
> systems you should choose a language that was explicitly designed to
> support exploratory, iterative, incremental, adaptive, and evolutionary
> development; that is not strongly typed (because the real world is not);
> that hides implementation details (like memory management); and,
> supports the use of domain vernacular in its grammar, etc. etc. Sounds
> like Smalltalk. [grin]

Heh, _no_. I'll use whatever languages _allow_ me to find elegant
solutions for problems arising in the particular natural systems I'm
trying to represent. _Whatever_ languages they happen to be. Why?
Because the objective is to actually get the work done, not pontificate
on or wring my hands in subservience to "What One Should Do" ... like
some 10 commandments of the Software God.

It sounds like you don't actually object to formalized, repeatable, and
communicable decision-making processes. It sounds more like you merely
require others to use _your_ formalized, repeatable, communicable
process.

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com
In reply to this post by Prof David West
Prof David West wrote:
> Years ago, companies were demanding that all of their applications be
> written in C++, because of speed and because of "features" like multiple
> inheritance and friend declarations that "improved the efficiency of
> your code."

Even Squeak introduces `traits' for aspect-oriented programming -- in
the spirit of multiple inheritance... and not unlike Ruby's mixins too.
Don't like features, don't use them...
In reply to this post by Prof David West
Hi,
Here's a new language that may be of interest: a dynamically-typed
rewriting language, sort of a cross between Smalltalk and Haskell.

http://code.google.com/p/pure-lang/

Note the native-code LLVM backend.

Marcus