
Select the Right NOx Control Technology

Most major industrialized urban areas in the U.S. are unable to meet the National Ambient Air Quality Standards (NAAQS) for ozone. Atmospheric studies have shown that ozone formation is the result of a complex set of chemical reactions involving volatile organic compounds (VOCs) and nitrogen oxides (NOx). Those studies indicate that many urban areas with VOC/NOx ratios greater than 15:1 can reduce ambient ozone levels only by reducing NOx emissions. Many states, therefore, are implementing NOx control regulations for combustion devices in order to achieve compliance with the NAAQS ozone standard.

This article discusses the characterization of NOx emissions from industrial combustion devices. It then provides guidance on how to evaluate the applicable NOx control technologies and select an appropriate control method.

Characterizing Emissions

Most industrial combustion devices have not been tested to establish their baseline NOx emission levels. Rather, the NOx emissions from these units have been simply estimated using various factors. In light of recent regulations, however, it is mandatory that the NOx emissions from affected units now be known with certainty. This will establish each unit’s present compliance status and allow definition of the applicable control technologies for those units that will require modification to achieve compliance.

It is, therefore, important to test each combustion device to verify its NOx emissions characteristics. The testing process should be streamlined to provide timely and necessary information for making decisions regarding the applicability of NOx control technologies.

The basic approach is to select one device from a class of units (that is, of same design and size) for characterization testing (NOx, CO, and O2). Testing is conducted at three load points that represent the normal operating range of the unit, with excess oxygen variation testing conducted at each load point. Figure 1 illustrates the typical characterization test results. The remaining units in the class are tested at only one load point, at or near full load.

The operational data obtained during testing, in conjunction with the NOx and CO data, are used to define the compliance status of each unit, as well as the applicable NOx control technologies for those devices that must be modified. In most instances, this approach will allow multiple units to be tested in one day and provide the necessary operational data the engineer needs to properly evaluate the potential NOx control technologies.

Fundamental Concepts

Reasonably available control technology (RACT) standards for NOx emissions are defined in terms of an emission limit, such as 0.2 lb NOx/MMBtu, rather than mandating specific NOx control technologies. Depending on the fuel fired and the design of the combustion device, a myriad of control technologies may be viable options. Before selecting RACT for a particular combustion device, it is necessary to understand how NOx emissions are formed so that the appropriate control strategy may be formulated.

NOx emissions formed during the combustion process are a function of the fuel composition, the operating mode, and the basic design of the boiler and combustion equipment. Each of these parameters can play a significant role in the final level of NOx emissions.

NOx formation is attributed to three distinct mechanisms:

1. Thermal NOx Formation;

2. Prompt (i.e., rapidly forming) NO formation; and

3. Fuel NOx formation.

Each of these mechanisms is driven by three basic parameters – temperature of combustion, time above threshold temperatures in an oxidizing or reducing atmosphere, and turbulence during initial combustion.

Thermal NOx formation in gas-, oil-, and coal-fired devices results from thermal fixation of atmospheric nitrogen in the combustion air. Early investigations of NOx formation were based upon kinetic analyses for gaseous fuel combustion. These analyses by Zeldovich yielded an Arrhenius-type equation showing the relative importance of time, temperature, and oxygen and nitrogen concentrations on NOx formation in a pre-mixed flame (that is, the reactants are thoroughly mixed before combustion).
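For reference, the Zeldovich-derived global rate expression is commonly quoted in a form like the following; the constants vary from source to source, so treat this as a sketch of the functional form rather than the article’s own equation:

```latex
\frac{d[\mathrm{NO}]}{dt}
  \;=\; k \, T^{-1/2}
  \exp\!\left(-\frac{E_a}{RT}\right)
  [\mathrm{N_2}]\,[\mathrm{O_2}]^{1/2}
```

Here T is the local flame temperature, Ea is an effective activation energy, and k is an empirical pre-exponential constant. The exponential temperature dependence is what produces the sharp rise in thermal NOx formation at high combustion temperatures.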

While thermal NOx formation in combustion devices cannot actually be calculated using the Zeldovich relationship, the relationship does illustrate the importance of the major factors that influence thermal NOx formation, and it shows that NOx formation increases exponentially with combustion temperatures above 2,800°F.

Experimentally measured NOx formation rates near the flame zone are higher than those predicted by the Zeldovich relationship. This rapidly forming NO is referred to as prompt NO. The discrepancy between the predicted and measured thermal NOx values is attributed to the simplifying assumptions used in the derivation of the Zeldovich equation, such as the equilibrium assumption that O = ½ O2. Near the hydrocarbon-air flame zone, the concentration of formed radicals, such as O and OH, can exceed the equilibrium values, which enhances the rate of NOx formation. However, the contribution of prompt NO to NOx emissions is negligible in comparison to thermal and fuel NOx.

When nitrogen is introduced with the fuel, completely different characteristics are observed. The NOx formed from the reaction of the fuel nitrogen with oxygen is termed fuel NOx. The most common form of fuel nitrogen is organically bound nitrogen present in liquid or solid fuels where individual nitrogen atoms are bonded to carbon or other atoms. These bonds break more easily than the diatomic N2 bonds so that fuel NOx formation rates can be much higher than those of thermal NOx. In addition, any nitrogen compounds (e.g., ammonia) introduced into the furnace react in much the same way.

Fuel NOx is much more sensitive to stoichiometry than to thermal conditions. For this reason, traditional thermal treatments, such as flue gas recirculation and water injection, do not effectively reduce NOx emissions from liquid and solid fuel combustion.

NOx emissions can be controlled either during the combustion process or after combustion is complete. Combustion control technologies rely on air or fuel staging techniques that take advantage of the kinetics of NOx formation, on the introduction of inerts that inhibit NOx formation during combustion, or on both. Post-combustion control technologies rely on introducing reactants in specified temperature regimes that destroy NOx, either with or without a catalyst to promote the destruction.

Combustion Control

The simplest of the combustion control technologies is low-excess-air operation – that is, reducing the excess air level to the point of some constraint, such as carbon monoxide formation, flame length, flame stability, and so on. Unfortunately, low-excess-air operation has proven to yield only moderate NOx reductions, if any.

Three technologies that have demonstrated their effectiveness in controlling NOx emissions are off-stoichiometric combustion, low-NOx burners, and combustion temperature reduction. The first two are applicable to all fuels, while the third is applicable only to natural gas and low-nitrogen-content fuel oils.

Off-stoichiometric, or staged, combustion is achieved by modifying the primary combustion zone stoichiometry – that is, the air/fuel ratio. This may be accomplished operationally or by equipment modifications.

An operational technique known as burners-out-of-service (BOOS) involves terminating the fuel flow to selected burners while leaving the air registers open. The remaining burners operate fuel-rich, thereby limiting oxygen availability, lowering peak flame temperatures, and reducing NOx formation. The unreacted products combine with the air from the terminated-fuel burners to complete burnout before exiting the furnace. Figure 2 illustrates the effectiveness of this technique applied to electric utility boilers. Staged combustion can also be achieved by installing air-only ports, referred to as overfire air (OFA) ports, above the burner zone and redirecting a portion of the air from the burners to the OFA ports. A variation of this concept, lance air, consists of installing air tubes around the periphery of each burner to supply staged air.

BOOS, overfire air, and lance air achieve similar results. These techniques are generally applicable only to larger, multiple-burner, combustion devices.

Low-NOx burners are designed to achieve the staging effect internally. The air and fuel flow fields are partitioned and controlled to achieve the desired air/fuel ratio, which reduces NOx formation and results in complete burnout within the furnace. Low-NOx burners are applicable to practically all combustion devices with circular burner designs.

Combustion temperature reduction is effective at reducing thermal NOx but not fuel NOx. One way to reduce the combustion temperature is to introduce a diluent. Flue gas recirculation (FGR) is one such technique.

FGR recirculates a portion of the flue gas leaving the combustion process back into the windbox. The recirculated flue gas, usually on the order of 10-20% of the combustion air, provides sufficient dilution to decrease NOx emissions. Figure 3 correlates the degree of emission reduction with the amount of flue gas recirculated.

On gas-fired units, emissions are reduced well beyond the levels normally achievable with staged combustion control. In fact, FGR is probably the most effective and least troublesome system for NOx reduction for gas-fired combustors.

An advantage of FGR is that it can be used with most other combustion control methods. Many industrial low-NOx burner systems on the market today incorporate induced FGR. In these designs, a duct is installed between the stack and forced-draft inlet (suction). Flue gas products are recirculated through the forced-draft fan, thus eliminating the need for a separate fan.

Water injection is another method that works on the principle of combustion dilution, very similar to FGR. In addition to dilution, it reduces the combustion air temperature by absorbing the latent heat of vaporization of the water before the combustion air reaches the primary combustion zone.

Few full-scale retrofit or test trials of water injection have been performed. Until recently, water injection has not been used as a primary NOx control method on any combustion devices other than gas turbines because of the efficiency penalty resulting from the absorption of usable energy to evaporate the water. In some cases, water injection represents a viable option to consider when moderate NOx reductions are required to achieve compliance.

Reduction of the air preheat temperature is another viable technique for cutting NOx emissions. This lowers peak flame temperatures, thereby reducing NOx formation. The efficiency penalty, however, may be substantial. A rule of thumb is a 1% efficiency loss for each 40°F reduction in preheat. In some cases this may be offset by adding or enlarging the existing economizer.
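The rule of thumb above lends itself to a quick estimate. The following sketch (the function name and example numbers are illustrative, not from the article) converts a proposed preheat reduction into an approximate efficiency penalty:

```python
def preheat_efficiency_penalty(preheat_reduction_f: float) -> float:
    """Estimate boiler efficiency loss, in percentage points, using the
    rule of thumb of ~1% loss per 40 deg F reduction in air preheat."""
    return preheat_reduction_f / 40.0

# Example: cutting air preheat by 200 deg F costs roughly
# 200 / 40 = 5 percentage points of boiler efficiency.
print(preheat_efficiency_penalty(200.0))  # 5.0
```

Whether such a penalty is acceptable depends on whether the lost heat can be recovered elsewhere, for example by the economizer changes the article mentions.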

Post-Combustion Control

There are two technologies for controlling NOx emissions after formation in the combustion process – selective catalytic reduction (SCR) and selective noncatalytic reduction (SNCR). Both of these processes have seen very limited application in the U.S. for external combustion devices. In selective catalytic reduction, a gas mixture of ammonia with a carrier gas (typically compressed air) is injected upstream of a catalytic reactor operating at temperatures between 450°F and 750°F. NOx control efficiencies are typically in the 70-90% range, depending on the type of catalyst, the amount of ammonia injected, the initial NOx level, and the age of the catalyst.

The retrofit of SCR on existing combustion devices can be complex and costly. Apart from the ammonia storage, preparation, and control monitoring requirements, significant modifications to the convective pass ducts may be necessary.

In selective noncatalytic reduction, ammonia- or urea-based reagents are injected into the furnace exit region, where the flue gas is in the range of 1,700-2,000°F. The efficiency of this process depends on the temperature of the gas, the reagent mixing with the gas, the residence time within the temperature window, and the amount of reagent injected relative to the concentration of NOx present. The optimum gas temperature for the reaction is about 1,750°F; deviations from this temperature result in a lower NOx reduction efficiency. Application of SNCR, therefore, must be carefully assessed, as its effectiveness is very dependent on combustion device design and operation.

Technology Selection

As noted previously, selection of applicable NOx control technologies depends on a number of fuel, design, and operational factors. After identifying the applicable control technologies, an economic evaluation must be conducted to rank the technologies according to their cost effectiveness. Management can then select the optimum NOx control technology for the specific unit.

It should be noted that the efficiencies of NOx control technologies are not additive, but rather multiplicative. Efficiencies for existing combustion devices have been demonstrated in terms of percent reduction from baseline emission levels. This must be taken into account when considering combinations of technologies.

Consider, for example, the following hypothetical case. Assume a baseline NOx emissions level of 100 ppmv and control technology efficiencies as follows: low-excess-air operation (LEA), 10%; low-NOx burners (LNB), 40%; and flue gas recirculation (FGR), 60%. If the three controls are installed in the progressive order LEA-LNB-FGR, each acts only on the emissions remaining after the previous one: 100 ppmv × (1 − 0.10) × (1 − 0.40) × (1 − 0.60) = 21.6 ppmv, an overall reduction of 78.4% – not the impossible 110% that simply adding the individual efficiencies would suggest.
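The multiplicative accounting described above can be sketched in a few lines of code (the function name is mine, not the article’s):

```python
def combined_nox(baseline_ppmv: float, efficiencies: list[float]) -> float:
    """Apply NOx control efficiencies multiplicatively: each control
    reduces only the emissions remaining after the previous one."""
    remaining = baseline_ppmv
    for eff in efficiencies:
        remaining *= (1.0 - eff)
    return remaining

# LEA (10%), LNB (40%), FGR (60%) on a 100-ppmv baseline:
# 100 x 0.90 x 0.60 x 0.40 = 21.6 ppmv remaining.
print(round(combined_nox(100.0, [0.10, 0.40, 0.60]), 1))  # 21.6
```

Note that the order of the controls does not change the arithmetic result here; what matters for selection is that the incremental benefit of each added control shrinks as the remaining emissions shrink.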

It should also be noted that combining same-principle technologies (for example, two types of staged combustion) would not provide a significantly greater NOx reduction than either technology alone, since they operate on the same principle.

It must be emphasized that virtually all of the available control technologies have the potential for adversely affecting the performance and/or operation of the unit. The operational data obtained during the NOx characterization testing, therefore, must be carefully evaluated in light of such potential impacts before selecting applicable control technologies. Operational limitations such as flame envelope, furnace pressure, forced-draft fan capacity, and the like must be identified for each potential technology and their corresponding impacts quantified. (Reference (4), for example, discusses these items in detail.)

As anyone familiar with combustion processes knows, one technology does not fit all. Careful consideration must be used to select the appropriate, compatible control technology or technologies to ensure compliance at least cost with minimal impact on performance, operation, and capacity.


Think Tank and Radio Thoughts on Domestic Technologies for Americans

Welcome to this 21st day of October, 12 years into the 21st century. I wish to thank all my online readers and radio listeners for their continued support. For today’s talk I will discuss many items having to do with our technology for domestic purposes: entertainment, safety, education, and personal communication. It all matters, and it is changing the way we live, how we think, and our path forward into the future. Indeed, these are all interrelated topics which shouldn’t necessarily be viewed as separate issues, in my humble opinion.

Okay so, before we begin let me remind you of the format here: I talk and you listen, then it will be your turn to “like” or shout out pro or con with your own opinion – provided that your arguments are not pandering, preaching to the choir, or mere talking points of some particular political persuasion – no need to repeat what’s been said elsewhere – for this is the place of original thinking and drilling down into the subject matter which affects us all, whether we care to realize it or not. Fair enough? Let’s begin.

Is The Internet Changing the Way We Use and Buy Dictionaries?

Not long ago, I went to the thrift stores nearby to seek out used books. A friend of mine asked me if I could look for a dictionary, something he could flip through with perhaps 160,000-plus words – so not a small one, but definitely not a large unabridged version either. Without thinking, I said, “sure, I’ll see what they have,” and then departed for my used-book shopping spree for the month. Generally, I find a dozen or so books to read, mostly nonfiction, but, like everyone, I have a few fiction series by my favorite authors that I like to read.

Due to all the new e-books and e-readers, one thing I’ve noticed is that it’s difficult to find hardbound books at the used bookstores or thrift stores until six months after they’ve been published. Previously it was quite easy to do this, but since fewer people are buying hardbound books, and are buying e-books instead, they are not being bought in the numbers they were before. It is quite evident that some of the big-box retailers have been challenged by this – that is to say, new book sales – but it is also affecting the used-book market, because people who have e-books aren’t allowed to resell them later. Therefore, it is affecting the hand-me-down market.

Now then, while I was looking for a used dictionary for my friend, I found hundreds of them; I couldn’t believe how many were available. But then again, consider this: more and more people are merely typing a word into a search engine, which auto-corrects spelling and then lists online dictionaries. Since most people are online all the time, and those who are writing or doing reports for school have the Internet running in the background along with the Google search engine, they merely “google it,” and so they no longer need a dictionary at their desk. This is why everyone has donated them to the used bookstores and/or thrift shops.

Do you remember when you were in school and you had a writing assignment, and if you asked your teacher what a word meant, she told you to “look it up,” because that’s what dictionaries are for? Today, kids are using tablet computers in the classroom for learning, so when they look something up, they look it up online, and this habit will probably follow them well into adulthood. In any case, let’s talk about some of the technology in the classroom and how that will also affect the way we learn, think, and solve problems for ourselves in later life.

Technology in The Classroom – What About ADD and ADHD?

There was an interesting article in the science news from a psychologist specializing in learning disorders, in which she made a very interesting statement: “while videogames do not cause ADD or ADHD, if someone is on the borderline, it’s enough to push them over the edge.” Could the same be said for the average Internet surfer, who spends only 12 to 15 seconds on average on any webpage before clicking out or going to a different page – that they are at risk of ADD or ADHD?

What we’re doing is training attention spans downward and diminishing the level of human concentration with all of our technology. Using that same technology in classroom learning may appease the children, or keep high schoolers learning online and doing their assignments, perhaps provoking their curiosity with novelty – but what about pushing kids over the edge toward ADD or ADHD? Do you see that point?

What about the challenges with human eyesight? Have you ever spent hours working on a computer project, or doing computer work, and then tried to refocus on something far away, or something very tiny like reading the label on a food package? Have you noticed that you can’t do it, and you have to wait for your eyes to readjust? Much the same as walking into a dark room, it takes a moment to readjust.

There are many challenges with learning that have to do with eyesight. The common ones are lazy eye and dyslexia, along with kids who are nearsighted and have trouble seeing the chalkboard or viewing the lecturer; likewise, there are kids who are farsighted and have a difficult time reading – they are thoroughly challenged. Not only is it embarrassing for them when reading out loud in the classroom, but they often get nausea or tire easily when reading for over 30 minutes, meaning it is difficult for them to get through their schoolwork. Is our technology causing more of these problems in our schools?

There is an effect, and that effect is not zero; thus, in effect, we are experimenting on the next generation of schoolchildren. Surely, all those who make tablet computers and personal tech devices for education wish to push this technology into the classroom to drive sales and profits. Still, do we really know what we are getting ourselves into?

Further, if the kids can look up anything they want online, they start to trust that device or medium of education technology. We know what happens when adults start believing everything they see on TV, or what happens when folks of a certain political persuasion start reading only things which agree with them and their current POV (point of view): they become jaded and mentally boxed-in in their political views.

If people trust what their teachers say, or what they learn in college, and there is a socialist or left-leaning slant, we will have more voters leaning that way. Likewise, if folks trust the Internet, and there is any amount of filtration of content at the search engines – even by only 1 or 2% – then it is enough to swing an election, and if you swing two or three elections in a row, you will end up with a different country in the future. People often note this problem with the mass media, but have they considered the Internet as it is integrated into our education system? We cannot stop the integration – it’s part of our society – nor should we, but we need to be cognizant and question not only authority, but the devices which deliver us information, and the software and companies behind them – and their agendas, as they are not ALL purely profit-motivated.

Now then, you can definitely see that, right? Just as TV has changed our society in many ways, most of them not for the better, it has pushed people into a bizarre type of consumerism due to branding, advertising, and marketing. Okay so, let’s get back to the political challenges and implications of all this in a few minutes, and instead address the challenges we have with e-commerce, advertising, marketing, branding, and perhaps the unethical side of it all online.

Internet Reviews and The Shill Factory

Currently, we have a huge problem with Internet reviews. Unfortunately, if a business gets a bad review or too many complaints, consumers stop shopping there. They trust what they read on the Internet, even though it was written by untrustworthy or unknown sources. Many times it was written by shills or competitors trying to lift their own ratings while trashing their competition. Why should this surprise anyone?

It happens all the time in the real world with consumer groups or nonprofit consumer bureaus. It’s happening online, too, but unfortunately more and more people trust what they read online, and some people even reason that if everyone likes it, that consensus will overcome the few negatives written by competitors posting negative comments. Well, one problem we have is that there are companies who will post positive reviews online for a fee – by the dozen, by the hundreds, or even by the thousands. You see that problem? Okay, now back to the topic of political indoctrination in our schools.

Pre-Indoctrination Before the Vote – Religion and Socialism in Our Schools

For those who are without religion, they duly note the indoctrination of many world religions in private religious schools, churches, and communities. Folks grow up believing in a certain type of philosophy, or a certain version of history, even to the point that they choose not to look at the fossil record of dinosaurs because it can’t possibly jibe with what they’ve been told. Therefore they merely overlook that and hold their same views.

Those who are religious can’t understand why anyone who is nonreligious thinks that everything just started with some big bang, or why they don’t believe in God. In fact, many religious folks wish to convert other people so that they can know the truth, even if they themselves can’t prove it. When asked for proof, they simply say it’s a matter of faith.

When our schools choose to participate in pre-indoctrination, they must cover up factual evidence and scientific discovery, and continue to play along with a closed-minded view of the world. Is that really learning? Is that really teaching our kids to think? Of course, there is significant risk and reason for those who are religious to continue to force their will onto the schools to maintain their numbers and percentages of our population to serve their political will.

Indeed, the other side is just as bad, as there are so many left-leaning and socialist views coming out of our high schools and colleges that those graduating with college degrees are twice as apt to vote for the left-leaning agenda, even if it goes against basic economics and the free-market capitalism which historically has made our country great, whereas socialism has destroyed economies, lives, and entire civilizations, forcing them into bankruptcy.

Now then, it’s great to have tablet computers and personal tech tools, and it is possible that they will speed up learning – it also saves trees from being cut down for textbooks, I suppose – but nevertheless, any new tool which increases learning also increases the ability to indoctrinate our students, kids, and family members, and thus the potential to indoctrinate them faster. If these tools are used in the incorrect way, they will end up pushing a political agenda and causing problems for our nation.

When these tools are integrated into social networks, which I also have my doubts about, that is to say I am unconvinced they are a net positive for our society, then we could have social engineering and peer pressure used to indoctrinate our children using these tools. Thus they can be used by either side to force their political agenda, while disguising themselves as wonderful teaching technologies. I suppose other future technologies such as holographic simulation will be used to tell of historical events, where the children can see and visualize what happened, such as George Washington crossing the Delaware, and they will feel as if they had been there.

Holographic Simulation Training Strategies – a Gargantuan Time and Efficiency Saver

Yes, there will be more comprehension using holographic teaching technologies, and it will be easier for these children to commit the facts of history to memory – but not if we rewrite the history and display it other than its reality. After all, the history we learn is usually one version of what happened, and we don’t know for sure, because none of us were there if it happened over 100 years ago. Nevertheless, the kids will feel as if they were there, and therefore they are more apt to believe whatever is displayed. This becomes a tremendous challenge, and it’s far too easy for one side or the other to push their political agendas.

Still, I am for holographic simulation training. I do believe these strategies will take us further faster into the future and increase the speed of learning, meaning our children can learn more in a shorter amount of time with better comprehension. That’s a good thing, but not if it’s misused. Humans have always misused the tools they’ve made – and it is human nature to shift the blame – so let’s not be so naïve as to think educational technologies won’t be misused in the same way.

Rogue regimes and dictators have used indoctrination, and so have major religions in the teaching of children. And like I said, when you add a little bit of peer pressure onto the flock, classroom, troop of soldiers, or population, it’s amazing the damage you can do if you aren’t an ethical leader, or if you don’t have the best intentions of freedom, liberty, and the pursuit of a positive life experience in mind.

Another challenge in school: as we teach these children and have the technology to monitor their progress along the way, we are also going to use data mining to find the bad apples in advance of their disdain for authority, or to find anomalies – such as those who are most suited to the indoctrination, the children who have bought the program hook, line, and sinker and can now be groomed for future leadership. That too is a problem. Further, all of this big data, and the collection of information on everything that everyone does, is going to pose challenges in the future.

Not long ago, I was discussing all this with an acquaintance, and we noted that each of us has at times viewed things on the Internet that we didn’t agree with. My acquaintance had bought a Koran because he believed he should know what he was talking about when it came to Middle Eastern policy. Who could deny that? Myself, I am an aviation buff, and during the Nazi regime they had some of the best aviation technologies and rocketry of the time – they were far advanced; in fact, if that Germany of the period still existed today, some of its designs would still be on par with today’s technology.

But if my friend has libertarian viewpoints and has read the Koran, will he become a false positive on some government watch list? Just because I like reviewing the aircraft designs of past periods, does that make me a neo-Nazi sympathizer? It shouldn’t, but that’s the fear we face with too many false-positive data-triggering events from artificially intelligent algorithms that aren’t quite there yet.

Will we be doing the same thing to our students using educational technologies? After all, curiosity is a good sign of high intelligence, higher learning, and of the creative genius – something we need in our population to progress as a nation as we move forward with such technologies. Perhaps we could liken this to librarians who are asked to spy on citizens. It just doesn’t seem proper in a nation which prides itself on freedom and liberty.

Just Because You Like Something I “Like” Doesn’t Mean I Like You

Just because you “plus” something or “like” something online doesn’t mean you actually like it or enjoy it. You might despise it, but you find it interesting, want to bookmark it, and you’d like other people to know what you’ve discovered – perhaps they might be equally disgusted. Speaking of which, you may occasionally like one of the topics I’m talking about, and you may even “like” or “plus” an article or two, but that doesn’t mean you’d like all that I have to say, or that I like you, for that matter.

Also, I’d like to point out that if our children use these educational technologies, along with their social networks, that doesn’t mean they won’t change their views in the future as they learn more information. Who knows – our society may change one day and value things that today we think are atrocious, or find atrocious things that today we value. Nevertheless, their record will remain forever, for their entire lives. That’s a long time, and the United States has certainly changed in the last eighty years, and we need to be careful with this. Our children are not terrorists, and they shouldn’t be part of some giant experiment that targets individual children in their childhood as being problematic in the future.

That doesn’t mean we don’t need to use technology to help us catch real terrorists; it’s just that we need to be very careful about the criteria we use, the algorithms we write, and the formidable challenge of weeding out false positives, rather than just placing people on watch lists for no reason. Just because something smells fishy doesn’t mean that individual is a shark out to hurt society. Speaking of the sense of smell, and the future of the technology of scent, I’d like to bring up another point:

Using AI Technology Plus Canine Smell for Added Synergy In Catching Illicit Shipments

University researchers, tech companies, and DARPA have made incredible progress with electronic sniffing devices since Saddam Hussein threatened to use WMD chemical weapons on US forces, and since 9/11 and the anthrax scares that followed. Still, today we use dogs to sniff out drugs more often than not, because they are well evolved for the task. Perhaps this might be an interesting paper to read:

“Urban Search and Rescue with Canine Augmentation Technology,” by Alexander Ferworn, Alireza Sadeghian, Kevin Barnum, Hossein Rahnama, Huy Pham, Carl Erickson, Devin Ostrom, Lucia Dell’Agnese

The abstract attached to that paper stated:

“The agility, sense of smell, hearing and speed of dogs is put to good use by dedicated canine teams involved in Search and Rescue operations. In comparison to dogs, humans hear less, cannot effectively follow a scent and actually slow the dog down when involved in area searches. To mitigate this problem the Network-Centric Applied Research Team has been working with the Police to augment SAR dogs with supporting technologies to extend the dog’s potential area of operation.”

We do know that humans working with computers and technology tools tend to do better than computers working alone, or humans working without tools. By supplementing the canines with better tools, we can catch more of the drugs, arms, and evildoers coming into our nation over our borders, through our ports, at our airports, or by rail or underground tunnel.

Maybe we ought to employ the technologies we have, use them to complement each other, and then use them all together with mankind’s best friend. Does this mean we can protect the American people 100%? Unfortunately not, but it is a solid line of defense; the rest we must do with hypervigilance and a strong presence of first responders. Who might those first responders be? Yes, let’s talk about that, shall we?

Should Motorcycle Cops Get Special Forces Training – Yes, and Let Me Explain

There was an interesting set of articles in our local paper, The Desert Sun (Palm Desert, CA), and perhaps you’ve seen similar articles in your own city, where motorcycle cops are getting advanced first-responder training. Now then, as a former street-bike motorcycle racer, I can tell you that a motorcycle in heavy traffic definitely has the best chance of being first to any call, perhaps by as much as 2-3 minutes depending on the location and hour of the day, for instance during rush-hour traffic.

Okay so, now you have a motorcycle cop there first. Perhaps it was a mass shooting, perhaps it is even still going on, and you have one police officer on scene who has to engage the criminals, terrorists, or a shooter on his own until back-up arrives, again as much as 2-3 minutes later. What if there are multiple shooters?

This is why they will need advanced tactical training, as they may or may not have the advantage of surprise on their side, and they need to prevent further deaths and take out the bad guy(s). That requires fast thinking, pre-planning, and the knowledge to stay alive on an uneven playing field where the bad guys might have them outgunned, which is happening more and more due to automatic weapons in the hands of criminals and drug gangs, with far too many citizens unarmed, as a percentage of the population, to defend themselves, you see?

Interestingly enough, as I am speaking, today in fact there was a horrible shooting in a very nice suburban area near Milwaukee, Brookfield, WI, where a suspect went into a day spa and gunned down three people in cold blood. What if a traffic cop on a motorcycle gets a call from dispatch like that? He rides up and is immediately in a gunfight? See the point? Now then, remember some of the other shootings, those at schools, workplaces, movie theaters, and government buildings: same issues and same challenges. It hardly matters if it is armed bank robbers, gang violence, or a lone-wolf homegrown terrorist.

Now then, the point of all this conversation is quite simple. First, we must not do anything that undermines freedom and liberty, nor should we indoctrinate our citizens to believe that freedom is something it isn’t, or something they cannot attain. It is fine to get everyone on the same page, as religion has done in the past to help organize society and civilization, but it is not okay to indoctrinate and box in the minds of our population. We keep stating that we hope to have more entrepreneurship and innovation in the future, but we can’t possibly achieve that if we indoctrinate our minds into a way of non-thinking.

Okay so, does this mean that more liberty and freedom will open our society up to potential attack from terrorists, criminals, or a foreign proxy attack under a false flag? Yes; whenever you have absolute freedom, you risk at least some security. That is why we must use our assets wisely, and not blow money on things that do not work. If we want more efficiency out of our security assets, then we must leverage technology, use it for training, and use it to our advantage, not to our disadvantage.

Well, that’s it for me talking; now it’s time for you to call in with your suggestions, solutions, and brilliant ideas. If you are viewing this radio transcript online as an Internet article, then please leave your comments below. Now then, the rules for commentary are quite simple: you don’t have to agree with me, nor do you have to disagree with me. All you need to do is bring your mind with you when you make a comment, wish to debate one of the subtopics, or have an interesting intellectual idea for our dialogue about our progress forward. Please consider all this and think on it. The phone lines are now open.


History of Wireless Technologies

The development of wireless technology owes it all to Michael Faraday, for discovering the principle of electromagnetic induction; to James Clerk Maxwell, for Maxwell’s equations; and to Guglielmo Marconi, for transmitting a wireless signal over one and a half miles. The sole purpose of wireless technology is communication through which information can be transferred between two or more points that are not connected by electrical conductors.

Wireless technologies have been in use since the advent of radio, which uses electromagnetic transmissions. Eventually, consumer electronics manufacturers started thinking about the possibilities of automating domestic microcontroller-based devices. Timely and reliable relay of sensor data and controller commands was soon achieved, which led to the development of the wireless communications we see everywhere now.

History

With radios being used for wireless communications in the World War era, scientists and inventors started focusing on ways to develop wireless phones. The radio soon became available to consumers, and by the mid-1980s, wireless phones or mobile phones started to appear. In the late 1990s, mobile phones gained huge prominence, with over 50 million users worldwide. Then the concept of wireless internet and its possibilities was explored, and eventually wireless internet technology came into existence. This gave a boost to the growth of wireless technology, which comes in many forms at present.

Applications of Wireless Technology

The rapid progress of wireless technology led to the invention of mobile phones, which use radio waves to enable communication from different locations around the world. The applications of wireless tech now range from wireless data communications in various fields, including medicine and the military, to wireless energy transfer and wireless interfaces for computer peripherals. Point-to-point, point-to-multipoint, and broadcast communication are all possible and easy now with the use of wireless.

The most widely used short-range wireless technology is Bluetooth, which uses short-wavelength radio transmissions to connect and communicate with other compatible electronic devices. This technology has grown to a phase where wireless keyboards, mice, and other peripherals can be connected to a computer. Wireless technologies are used:

· While traveling

· In hotels

· In business

· In mobile and voice communication

· In home networking

· In navigation systems

· In video game consoles

· In quality control systems

The greatest benefit of wireless technologies like Wi-Fi is portability. For distances between devices where cabling isn’t an option, technologies like Wi-Fi can be used. Wi-Fi communications can also serve as a backup link in case of network failures, and one can even use wireless technologies for data services in the middle of the ocean. However, wireless still has slower response times than wired communications and interfaces, though this gap is getting narrower with each passing year.

Progress of Wireless technology

Wireless data communications now come in several technologies, namely Wi-Fi (a wireless local area network), cellular data services such as GPRS, EDGE, and 3G, and mobile satellite communications. Point-to-point communication was a big deal decades ago, but now point-to-multipoint communication and wireless data streaming to multiple connected devices are possible. Personal networks of computers can now be created using Wi-Fi, which also allows data services to be shared by multiple systems connected to the network.

Wireless technologies with faster speeds at 5 GHz and better transmission capabilities were quite expensive when they were invented. But now almost all mobile handsets and mini computers come with technologies like Wi-Fi and Bluetooth, although with variable data transfer speeds. Wireless has grown to such a level that even mobile handsets can act as Wi-Fi hotspots, letting other handsets or computers connected to them share cellular data services and other information. Streaming audio and video wirelessly from a cell phone to a TV or computer is a walk in the park now.

Wireless technologies today are robust, easy to use, and portable, as there are no cables involved. Apart from local area networks, even metropolitan area networks have started using wireless tech (WMAN) and Customer Premises Equipment (CPE). Aviation, transportation, and the military use wireless technologies in the form of satellite communications. Wireless technologies are also used to transfer energy from a power source to a load without interconnecting wires, given that the load doesn’t have a built-in power source.

However, the fact that ‘nothing comes without a drawback’ also applies to wireless technology. Wireless technologies still have limitations, but scientists are currently working to remove the drawbacks and add to the benefits. The main limitation is that wireless technologies such as Bluetooth and Wi-Fi can only be used over a limited area: the signals can be broadcast only over a particular distance, and devices outside this range won’t be able to connect. But the distance limitation shrinks every year. There are also a few security weaknesses that hackers can exploit to cause harm in a wireless network, but wireless technologies with better security features have started to come out, so this is not going to be a problem for long.

Speaking of progress, wireless technology is not limited to powerful computers and mobile handsets. The technology has progressed enough that Wi-Fi-enabled TVs and microwaves have started appearing in the markets. The latest and most talked-about wireless technology is NFC, or Near Field Communication, which lets users exchange data by tapping their devices together. Using wireless technologies is not as expensive as it was in the last decade, and with each passing year, newer and better wireless technologies arrive with greater benefits.


Leaping Into the 6th Technology Revolution

We’re at risk of missing out on some of the most profound opportunities offered by the technology revolution that has just begun.

Yet many are oblivious to the signs and are in danger of watching this become a period of noisy turmoil rather than the full-blown insurrection needed to launch us into a green economy. What we require is not a new spinning wheel, but fabrics woven with nanofibers that generate solar power. To make that happen, we need a radically reformulated way of understanding markets, technology, financing, and the role of government in accelerating change. But will we understand the opportunities before they disappear?

Seeing the Sixth Revolution for What It Is

We are seven years into the beginning of what analysts at BofA Merrill Lynch Global Research call the Sixth Revolution. A table by Carlota Perez, presented during a recent BofA Merrill Lynch Global Research luncheon hosted by Robert Preston and Steven Milunovich, outlines the revolutions, each unexpected in its own time, that led to the one in which we find ourselves.

  • 1771: Mechanization and improved water wheels
  • 1829: Development of steam for industry and railways
  • 1875: Cheap steel, availability of electricity, and the use of city gas
  • 1908: Inexpensive oil, mass-produced internal combustion engine vehicles, and universal electricity
  • 1971: Expansion of information and tele-communications
  • 2003: Cleantech and biotech

The Vantage of Hindsight

Looking back at 1971, we know that Intel’s introduction of the microprocessor marked the beginning of a new era. But in that year, this meant little to people watching Mary Tyler Moore and The Partridge Family, or listening to Tony Orlando & Dawn and Janis Joplin. People would remember humanity’s first steps on the Moon and the opening of relations between the US and China; in 2003, they would remember the successful completion of the Human Genome Project to 99.99% accuracy, and perhaps the birth of Prometea, the first horse cloned by Italian scientists.

According to Ben Weinberg, Partner, Element Partners, “Every day, we see American companies with promising technologies that are unable to deploy their products because of a lack of debt financing. By filling this gap, the government will ignite the mass deployment of innovative technologies, allowing technologies ranging from industrial waste heat to pole-mounted solar PV to prove their economics and gain credibility in the debt markets.”

Flying beneath our collective radar was the first floppy disk drive by IBM, the world’s first e-mail sent by Ray Tomlinson, the launch of the first laser printer by Xerox PARC and the Cream Soda Computer by Bill Fernandez and Steve Wozniak (who would found the Apple Computer company with Steve Jobs a few years later).

Times have not changed that much. It’s 2011, and many of us face a similar disconnect with the events occurring around us. We are at the equivalent of 1986, a year on the cusp of the personal computer and the Internet fundamentally changing our world. 1986 also marked the beginning of a major financial shift into new markets: venture capital (VC) experienced its most substantial fund-raising season, with approximately $750 million raised, and the NASDAQ helped create a market for these companies.

Leading this charge was Kleiner Perkins Caufield & Byers (KPCB), a firm that turned technical expertise into possibly the most successful IT venture capital firm in Silicon Valley. The IT model looked for a percentage of big successes to offset the losses: an investment like the $8 million in Cerent, which was sold to Cisco Systems for $6.9 billion, could make up for a lot of great ideas that didn’t quite make it.

Changing Financial Models

But the VC model that worked so well for information and telecommunications doesn’t work in the new revolution. Not only is the financing scale of the cleantech revolution orders of magnitude larger than the last; this early in the game, even analysts are struggling to see the future.

Steven Milunovich, who hosted the BofA Merrill Lynch Global Research lunch, remarked that each revolution has an innovation phase that may last as long as 25 years, followed by an implementation phase of another 25. Most money is made in the first 20 years, so real players want to get in early. But the question is: get in where, for how much, and with whom?

There is still market skepticism and uncertainty about the staying power of the clean energy revolution. Milunovich estimates that many institutional investors don’t believe in global warming and adopt a “wait and see” attitude, complicated by the government impasse on energy security legislation. For those who are looking at these markets, their motivation ranges from concerns about oil scarcity, supremacy in the “new Sputnik” race, and the shoring up of homeland security to, for some, concern about the effects of climate change. Many look askance at those who see that we are in the midst of a fundamental change in how we produce and use energy. Milunovich, for all these reasons, is “cautious in the short term, bullish on the long.”

The Valley of Death

Every new technology brings with it needs for new financing. In the Sixth Revolution, with budget needs 10 times those of IT, the challenge is moving from idea to prototype to commercialization. The Valley of Death, as a recent Bloomberg New Energy Finance whitepaper, “Crossing the Valley of Death,” pointed out, is the gap between technology creation and commercial maturity.

But some investors and policy makers continue to hope that private capital will fill this gap, much as it did in the last revolution. They express concern over the debt from government programs like the stimulus funds (American Recovery and Reinvestment Act), which have invested millions in new clean energy technologies as well as helping states rebuild infrastructure and fund other projects. They question why the traditional financing models, which made the United States the world leader in information technology and telecommunications, can’t be made to work today, if the government would just get out of the way.

But analysts from many sides of financing believe that government support, of some kind, is essential to move projects forward, because cleantech and biotech projects require a much larger input of capital to reach commercialization. This gap not only affects commercialization; it also affects investment in new technologies, because financial interests are concerned that their investment might not see fruition, that is, reach commercial scale.

How New Technologies Are Radically Different from the Computer Revolution

Infrastructure complexity

This revolution is highly dependent on an existing – but aging – energy infrastructure. Almost 40 years after the start of the telecommunications revolution, we are still struggling with a communications infrastructure that is fragmented, redundant, and inefficient. Integrating new sources of energy, and making better use of what we have, is an even more complex – and more vital – task.

According to “Crossing the Valley of Death,” the Bloomberg New Energy Finance Whitepaper,

“The events of the past few years confirm that it is only with the public sector’s help that the Commercialization Valley of Death can be addressed, both in the short and the long term. Only public institutions have ‘public benefits’ obligations and the associated mandated risk-tolerance for such classes of investments, along with the capital available to make a difference at scale. Project financiers have shown they are willing to pick up the ball and finance the third, 23rd, and 300th project that uses that new technology. It is the initial technology risk that credit committees and investment managers will not tolerate.”

Everything runs on fuel and energy, from our homes to our cars to our industries, schools, and hospitals. Most of us have experienced the disconnect we feel when caught in a blackout: “The air-conditioner won’t work so I guess I’ll turn on a fan,” only to realize we can’t do either. Because energy is so vital to every aspect of our economy, federal, state and local entities regulate almost every aspect of how energy is developed, deployed, and monetized. Wind farm developers face a patchwork quilt of municipal, county, state and federal regulations in getting projects to scale.

Incentives from government sources, as well as utilities, pose both an opportunity and a threat: the market rises and falls in direct proportion to funding and incentives. Navigating these challenges takes time and legal expertise, neither of which is in abundant supply to entrepreneurs.

Development costs

Though microchips are creating ever-smaller electronics, cleantech components, such as wind turbines and photovoltaics, are huge. They can’t be developed in a garage, like Hewlett and Packard’s first audio oscillator. A new generation of biofuels that utilizes nanotechnology isn’t likely to emerge from a dorm room, as Michael Dell’s initial business selling customized computers did. What this means for Sixth Revolution projects is that they have much larger funding needs, at much earlier stages.

Stepping up to support innovation, universities, and increasingly corporations, are partnering with early-stage entrepreneurs. They provide technology resources, such as laboratories and technical support, as well as management expertise in marketing, product development, government processes, and financing. Universities get funds from technology transfer arrangements, while corporations invest in new technologies, expanding their product base, opening new businesses, or obtaining cost-benefit and risk analyses of various approaches.

But even with such help, venture capital and other private investors are needed to cover costs that cannot be borne alone. These investors look for some assurance that projects will produce revenue to return the original investment, so concerns over the Valley of Death affect even early-stage funding.

Time line to completion

So many of us balk at two-year contracts for our cell phones that there is talk of making such requirements illegal. But energy projects, by their size and complexity, look out over years, if not decades. Commercial and industrial customers look to spread their costs over ten to twenty years, and contracts cover contingencies like future business failure, the sale of properties, or the prospect of renovations that may affect the long-term viability of the original project.

Kevin Walsh, managing director and head of Power and Renewable Energy at GE Energy Financial Services states, “GE Energy Financial Services supports the creation of CEDA or a similar institution because it would expand the availability of low-cost capital to the projects and companies in which we invest, and it would help expand the market for technology supplied by other GE businesses.”

Michael Holman, an analyst for Lux Research, noted that a $25 million investment in Google morphed into $1.7 billion five years later. In contrast, a leading energy storage company started with a $300 million investment, and nine years later its valuation remains uncertain. These are the kinds of barriers that can stall the drive we need for 21st-century technologies.

Looking to help bridge the gap for new cleantech and biotech projects is a proposed government-based solution called the Clean Energy Deployment Administration (CEDA). There are House and Senate versions, as well as a House Green Bank bill to provide gap financing. Recently, over 42 companies, representing many industries and organizations, signed a letter to President Obama supporting the Senate version, the “21st Century Energy Technology Deployment Act.”

Both the House and Senate bills propose to create, as an office within the US Department of Energy (DOE), an administration tasked with lending to risky cleantech projects for the purpose of bringing new technologies to market. CEDA would be the bridge needed to ensure the successful establishment of the green economy, partnering with private investment to bring the funding needed to get these technologies to scale. The two versions capitalize the agency with $10 billion (Senate) and $7.5 billion (House), with an expected 10% long-term loss reserve.

By helping a new technology move more effectively through the pipeline from idea to deployment, CEDA can substantially increase private sector investment in energy technology development and deployment. It can create a more successful US clean energy industry, with all the attendant economic and job creation benefits.

Who Benefits?

CEDA funding could be seen as beneficial for even the most unlikely corporations. Ted Horan is the Marketing and Business Development Manager for Hycrete, a company that sells waterproof concrete, hardly a company that springs to mind when we think about clean technologies. He recently commented on why Hycrete’s CEO, Richard Guinn, is a signatory on the letter to Obama:

“The allocation of funding for emerging clean energy technologies through CEDA is an important step in solving our energy and climate challenges. Companies on the cusp of large-scale commercial deployment will benefit greatly and help accelerate the adoption of clean energy practices throughout our economy.”

In his opinion, the manufacturing and construction that is needed to push us out of a stagnating economy will be supported by innovation coming from the cleantech and biotech sectors.

Google’s Dan Reicher, Director of Climate Change and Energy Initiatives, has been a supporter of CEDA from its inception. He has testified before both houses of Congress and was a signatory on the letter to President Obama. Google’s interest in clean and renewable energies dates back several years. The company is actively involved in projects to cut the costs of solar thermal power and expand the use of plug-in vehicles, and it has developed the PowerMeter, a product that brings home energy management to anyone’s desktop for free.

Financial support includes corporations like GE Energy Financial Services; Silicon Valley venture capital firms such as Kleiner Perkins Caufield & Byers and Mohr Davidow Ventures; and energy capital firms including Hudson Clean Energy and Element Partners. Can something like the Senate version of CEDA leap the Valley of Death?

As Will Coleman of Mohr Davidow Ventures said, “The Devil’s in the details.” The Senate version has two significant changes from previous proposals: an emphasis on breakthrough as opposed to conventional technologies, and political independence.

Neil Auerbach, Managing Partner, Hudson Clean Energy

“The clean energy sector can be a dynamic growth engine for the US economy, but not without thoughtful government support for private capital formation. [Government policy] promises to serve as a valuable bridging tool to accelerate private capital formation around companies facing the challenge, and can help ensure that the US remains at the forefront of the race for dominance in new energy technologies.”

Breakthrough Technologies

Coleman said that “breakthrough” includes the first or second deployment of a new approach, not just the game-changing science-fiction solution that finally brings us limitless energy at no cost. The Bloomberg New Energy Finance white paper uses the term “First of Class.” Bringing solar efficiency up from 10% to 20%, or bringing manufacturing costs down by 50%, would be a breakthrough that would help us begin to compete with threats from China and India. Conventional technologies, those competing with existing commercialized projects, would get less emphasis.

Political Independence

Political independence is top of mind for many who spoke about or provided an analysis of the bill. Michael Holman, analyst at Lux Research, expressed the strongest concerns, noting that CEDA doesn’t focus enough on incentives to bring together innovative start-ups with larger established firms:

“The government itself taking on the responsibility of deciding what technologies to back isn’t likely to work; it’s an approach with a dreadful track record. That said, it is important for the federal government to lead – the current financing model for bringing new energy technologies to market is broken, and new approaches are badly needed.”

For many, the Senate bill has several advantages over the House bill, in providing for a decision-making process that includes technologists and private-sector experts.

“I think both sides [of the aisle] understand this is an important program, and must enable the government to be flexible and employ a number of different approaches. The Senate version empowers CEDA to take a portfolio approach and manage risk over time, which I think is good. In the House bill, CEDA has to undergo the annual appropriation process, which runs the risk of politicizing every investment decision in isolation and before we have a chance to see the portfolio mature.” – Will Coleman, Mohr Davidow.

Michael DeRosa, Managing Director of Element Partners, added,

“The framework must ensure the selection of practical technologies, optimization of risk/return for taxpayer dollars, and appropriate oversight for project selection and spending. Above all, these policies must be designed with free-market principles in mind and not be subject to the political process.”

If history is any indication, those in the middle of game-changing events are rarely aware of their role in what will one day be known for its sweeping influence. But what we can see clearly now is the gap between idea and commercial maturity. CEDA certainly offers some hope that we may yet see the cleantech age grow into adulthood. But will we act quickly enough, before all the momentum and hard work that has brought us this far falls flat and other countries take leadership roles, leaving us in the dust?

THE GREEN ECONOMY is an information company, providing timely, credible facts and analyses on companies adapting to meet the challenges of a green future.

Markets are in transition; customers are demanding a higher quality of life, such as clean water and energy. These pressures are affecting commodity prices, access to markets, the nature of innovation, and more. At the same time, infrastructure (water, energy, transportation) is becoming more, not less, localized. These changes mean opportunities and demand new partnerships to deliver increasingly complex solutions. THE GREEN ECONOMY tells those stories.


Technology and Our Kids

With most people plugged in all the time, I often wonder what effect technology is having on our kids. Some say technology is a helpful learning tool that is making our kids smarter, and some say it is having no significant effect at all. Still others propose that technology use is encouraging social isolation, increasing attentional problems, encouraging unhealthy habits, and ultimately changing our culture and the way humans interact. While a causal relationship between technology use and these outcomes has not been established, I do think some of the correlations are strong enough to encourage you to limit your children’s screen time.

Is television really that harmful to kids? Depending on the show and the duration of watching, yes. Researchers have found that exposure to programs with fast edits and scene cuts that flash unrealistically across the screen is associated with the development of attentional problems in kids. As the brain becomes overwhelmed with changing stimuli, it stops attending to any one thing and starts zoning out. Too much exposure to these frenetic programs gives the brain more practice passively accepting information without deeply processing it.

However, not all programs are bad. Kids who watch slow-paced television programs like Sesame Street are not as likely to develop attentional problems as kids who watch shows like The Powerpuff Girls or Jimmy Neutron. Educational shows are slow paced, with fewer stimuli on the screen, which gives children the opportunity to practice attending to information. Children can then practice making connections between new and past knowledge, manipulating information in working memory, and problem solving. In short, a good rule of thumb is to limit television watching to one or two hours a day, and keep an eye out for a glossy-eyed, transfixed gaze on your child’s face. This is a sure sign that his or her brain has stopped focusing, and it is definitely time to shut off the tube so that he can start thinking, creating, and making sense of things again (all actions that grow rather than pacify the brain).

When you do shut off the tube, don’t be surprised if you have a meltdown on your hands. Technology has an addictive quality because it consistently triggers the release of neurotransmitters that are associated with pleasure and reward. There have been cases of addiction to technology in children as young as four years old. Recently in Britain, a four-year-old girl was put into intensive rehabilitation therapy for an iPad addiction! I’m sure you know how rewarding it is to sign onto Facebook and see that red notification at the top of the screen, or, even more directly, how rewarding playing games on your computer can be as you accumulate more “accomplishments.” I am guilty of compulsively checking my Facebook, email, and blog throughout the day. The common answer to this problem is, “All things in moderation.” While I agree, moderation may be difficult for children to achieve, as they do not possess the skills for self-discipline and will often take the easy route if not directed by an adult. According to a study by the Kaiser Family Foundation, children spend about 5 hours watching television and movies, 3 hours on the internet, 1 1/2 hours texting on the phone, and a 1/2 hour talking on the phone each day. That’s roughly 70 hours of technology use each week, and I am sure these figures are already moderated by parental controls and interventions. Imagine how much technology children use when left to their own devices! In a recent Huffington Post article, Dr. Larry Rosen summed it up well: “… we see what happens if you don’t limit these active participation. The child continues to be reinforced in the highly engaging e-world, and more mundane worlds, such as playing with toys or watching TV, pale in comparison.” How are you ever going to get your child to read a boring old black-and-white book when they could use a flashy, rewarding iPad instead? Children on average spend 38 minutes or less each day reading. Do you see a priority problem here?
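Tallying the daily figures just cited into a weekly total is simple arithmetic (the numbers below are the ones listed above, not new data):

```python
# Daily media use (hours), as cited from the Kaiser Family Foundation study above
daily_hours = {
    "television and movies": 5.0,
    "internet": 3.0,
    "texting": 1.5,
    "talking on the phone": 0.5,
}

daily_total = sum(daily_hours.values())  # 10.0 hours per day
weekly_total = daily_total * 7           # 70.0 hours per week
print(daily_total, weekly_total)         # 10.0 70.0
```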

With such frequent technology use, it is important to understand whether technology use encourages or discourages healthy habits. It’s reported that among heavy technology users, half get C’s or lower in school. Light technology users fare much better, with only a quarter of them receiving low marks. There are many factors that could mediate the relationship between technology use and poor grades. One could be decreased hours of sleep. Researchers from the Department of Family and Community Health at the University of Maryland found that children who had three or more technological devices in their rooms got at least 45 minutes less sleep than the average child the same age. Another could be the attention problems that are correlated with frequent technology use. Going further, multitasking, while considered a brilliant skill to have on the job, is proving to be a hindrance to children. It is not uncommon to see a school-aged child using a laptop, cell phone, and television while trying to complete a homework assignment. If we look closer at the laptop, we might see several tabs opened to various social networks and entertainment sites, and the phone itself is a mini computer these days. Thus, while multitasking, children are neglecting to give their studies full attention. This leads to a lack of active studying and a failure to transfer information from short-term to long-term memory, which ultimately leads to poorer grades in school. Furthermore, it is next to impossible for a child to engage in some of the higher-order information-processing skills, such as making inferences and making connections between ideas, while multitasking. We want our children to be deep thinkers, creators, and innovators, not passive information receptors who later regurgitate information without really giving it good thought. Therefore, we should limit access to multiple devices as well as limit duration of use.

Age comes into play when discussing the harmful effects of technology. For children younger than two years old, frequent exposure to technology can be dangerously detrimental, as it limits the opportunities for interaction with the physical world. Children two years old and younger are in the sensorimotor stage. During this stage it is crucial that they manipulate objects in the world with their bodies so that they can learn cause-and-effect relationships and object permanence. Object permanence is the understanding that when an object disappears from sight, it still exists. This reasoning requires the ability to hold visual representations of objects in the mind, a precursor to understanding visual subjects such as math later in life. To develop these skills, children need several opportunities every day to mold, create, and build using materials that do not have a predetermined structure or purpose. What a technological device provides is programs with a predetermined purpose that can be manipulated in limited ways, with consequences that often don’t fit the rules of the physical world. If the child is not being given a drawing app or the like, they are likely given programs that are in essence a lot like workbooks with structured activities. Researchers have found that such activities hinder the cognitive development of children this age. While researchers advise parents to limit their baby’s screen time to two hours or less each day, I would say it’s better to wait to introduce technology to your children until they have at least turned three and are demonstrating healthy cognitive development. Even then, technology use should be strictly limited to give toddlers time to engage in imaginative play.

Technology is changing the way children learn to communicate and use communication to learn. Many parents use devices to quiet their children in the car, at the dinner table, or wherever social activities may occur. The risk here is that the child is not witnessing and thinking about the social interactions playing out before him. Children learn social skills by modeling their parents’ social interactions. Furthermore, listening to others communicate and talking to others is how children learn to talk to themselves and be alone. The benefits of solitude for children come from replaying and acting out conversations they had or witnessed during the day, and this is how they ultimately make sense of their world. The bottom line is, the more we expose our children to technological devices, the worse their social skills and behavior will be. A Millennium Cohort Study that followed 19,000 children found that “those who watched more than three hours of television, videos or DVDs a day had a higher chance of conduct problems, emotional symptoms and relationship problems by the time they were 7 than children who did not.” If you are going to give your child screen privileges, at least set aside a time for just that, and don’t use technology to pacify or preoccupy your children during social events.

There’s no question that technology use can lead to poor outcomes, but technology itself is not to blame. Parents need to remember their very important role as mediators between their children and the harmful effects of technology. Parents should limit exposure to devices, discourage device multitasking, make sure devices are not used during social events, and monitor the content their child is engaging with (e.g., Sesame Street vs. Jimmy Neutron). Technology can be a very good learning tool, but children also need time to interact with objects in the real world, engage in imaginative play, and socialize face-to-face with peers and adults, and children of all ages need solitude and time to let their minds wander. We need to put more emphasis on the “Aha!” moment that happens when our minds are free of distractions. For this reason alone, technology use should be limited for all of us.


3 Steps To Identify Most Appropriate Travel Technology Solution For Your Business

Over the last 10 years, the travel business scenario has changed significantly. Today, selling travel products is all about the ‘best’ rates. To survive in the battle to offer the ‘best deal’ and ‘best fare’ to consumers, travel business owners have been forced to give up almost all of their possible profit margins.

I still remember when a service fee of $6 was the norm for online sales of air tickets. Commissions and contracts were available to travel agents. Cancellation fees on hotels were healthy.

The emergence of large online travel agencies changed the rules of the business across the globe. Fuel prices and global economic conditions added to the challenges of earning healthy margins. Travel became the most competitive business. Commissions dried up. Segment fees reduced and “no fee” became the new best seller.

On the Travel Technology side, along with successful implementations, I have heard stories of many failures where travel businesses were not able to derive what they wanted from technology. Most of the time, the key reasons for failure have been:

• An overambitious technology goal on a constrained budget
• A lack of ‘competitive’ Travel Technology expertise
• A poor IT team and management, suffering from ‘over promise’ and ‘under deliver’

In this ecosystem, how could a travel business set about defining an effective Technology Strategy for itself?

As a travel technologist, I have many motivations to say “buy my software”, but in my experience that’s not a good pitch. After carefully analyzing various successes and failures in the industry, here is what I feel I have learned:

Step 1: Identify what Travel Technology you need

Well, it is easier said than done. Most of the time not articulating the technology needs well is the biggest hurdle in Technology Strategy. As a travel business, here is what you could do to clearly articulate the need for technology.

• Pen down the technology needs of the organization as envisioned by the business owner / key management personnel
• Consult with people external to the organization, such as technology consultants, Travel Technology companies, GDS account managers, CRS / Suppliers, and Travel Technology bloggers
• Let a technology company interview you and recommend a solution (this is usually free)

Pursuing one or more of these three exercises diligently will build enough of a knowledge base about what your internal Technology Strategy should be. Identify and validate these thoughts with inputs from internal operations and marketing teams.

Step 2: Build vs. Buy?

This is considered the most complex question. The answer lies in dividing your Travel Technology needs into three buckets.

Proprietary

Customized

Out of the Box

What is proprietary?

It is important to identify your differentiator as a travel business. Most of the time, a proprietary need is the piece of technology that reduces the OPEX of your business operations or is the biggest revenue generator for your business model.

What is a customized need?

Is there any part of your technology needs that could be sourced through an existing technology solution, customized per your need?

What can be out of the box?

This bucket covers the most effort-intensive part of your technology needs, which may require a tremendous investment to build yourself. Getting an out-of-the-box solution that meets the majority of your requirements and configuring it to your needs is the ideal way to cover it. How to evaluate an out-of-the-box solution is in itself a comprehensive process.

Now we come to the next complex part of this exercise.

Step 3: Identify the right budget and vendor

Identifying the right budget and the right vendor is the most common shopping problem in every business sector. It takes a lot of time and energy to reach a decision.

Let’s compare technology acquisition to the decision of buying a laptop. There are many vendors to choose from. There are laptops priced from $300 to $3000. Your decision to buy would be shaped by the life of the laptop, and the continuity of business (your work) it will guarantee.

Similarly, the continuity of your travel business would significantly depend on the Travel Technology you choose. That is why identifying the right budget, and the vendor is a complex decision.

I will attempt to break down the process of identifying a vendor into simpler steps, since just asking a vendor for a quote would not necessarily help find the right one.

Expertise – Does the vendor have expertise in the travel business?

Support & Servicing – Travel is a service business. Irrespective of whether the product is ‘off the shelf’ or is being built for you, longevity and promptness of support is critically important to maintain a personalized quality of service to your customers.

Customization needed vs. Customizability – What is the future customizability of the software? (This applies to both out-of-the-box and custom-built software.) Will customization done today decrease the future cost of changing the technology? This is an important question to ask and seek answers to.

Value Add – Another important evaluation parameter for selecting a vendor is to check what part /component of the software is available free of cost and would remain so in the future.

Stability – Your guarantee of service to your customers depends on the stability of your vendor. It is important to seek answers to questions such as: Is the vendor going to be in business for long? How are you safeguarded if the vendor goes out of business?

References – Who are the customers of the vendor? Can the vendor provide references?

Maturity – Is the vendor’s organization a product oriented and innovation driven institution or do they survive by making money from one gig to another?

Empathy – Does the vendor consider your business as their own? How willing is the vendor to empathize with your business challenges?

Budgeting for technology is also a little challenging. It may be worthwhile to look beyond the one-time fee and understand all cost factors, including the cost of extended support the vendor may provide during your business life-cycle.

Cost should also include additional overheads of implementing technology, especially when you are dealing with GDS or CRS / Consolidators. Budgeting done in partnership with a selected vendor often yields the best results.


Technology Management Graduate Studies

The increasing importance of technology in every industry continues to drive the need for a diverse group of qualified professionals to manage the implementation and changes in technology. Pursuing a degree at a technology management graduate school can be the right step for beginning a rewarding career in the management of everything from computer hardware to information security within an organization.

Overview of Technology Management

Technology management professionals are in high demand because of the unique set of skills they possess. In this field, professionals are able to make leadership and management based decisions, develop solutions to technology issues, and approach the management of technology from a systems thinking perspective.

For any management professional, required skills include personnel management, organizational design and communication, and financial analysis and decision making. Technology management professionals combine this knowledge with specific information technology and systems technology skills to effectively lead, assess, forecast, and set strategy for a number of different information technology departments.

Technology Management Graduate Degree Curriculum

There are a number of technology management graduate school choices for prospective students. While there are differences depending on the individual program and school, students most often complete a set of core courses, electives, and a master’s project in order to complete the graduate degree. This combination helps prepare graduates to transfer relevant, useful skills into the workforce.

From graduate-level courses in technology to business, students are able to learn a variety of skills and gain valuable knowledge. Courses in technology often include information technology management, operations, emerging technologies, and ethics. Additionally, students take business and management courses such as supply chain management, sales and marketing, and accounting for technology.

These courses give students the opportunity to gain a broad foundation in the basic fundamentals of technology management. The electives and the master’s project build on that foundation to help students begin to focus their education on a specific area of technology management. Some examples of electives include knowledge management and relationship management. The master’s project combines the knowledge, theory, and skill a graduate student has gained through academic coursework and applies them to a real-world, challenging business issue or problem in order to find a solution or manage a specific scenario.

Career Development with a Technology Management Graduate Degree

Technology professionals must develop a variety of skills. In addition to understanding information technology, professionals in this field must also be able to manage change with technology and technology systems, integrate functional areas of business, and leverage technology and business management principles to effectively lead the technology-driven functions of a business.

These skills are needed in many different types of positions across all types of workplaces, from the federal government to non-profit and educational organizations to private corporations. From chief information officer to information technology manager, a degree in technology management is a helpful tool for gaining the experience and skills needed for all types of management positions in technology-driven departments.


How Much Is “Information Technology Debt” Hurting Your Bottom-Line?

Information Technology (IT) debt is basically the cost of maintenance needed to bring all applications up to date.

Shockingly, global “Information Technology (IT) debt” will reach $500 billion this year and could rise to $1 trillion by 2015!

But why should you take IT debt seriously and begin to take steps to eliminate this issue from your business?

According to Gartner, the world’s leading information technology research and advisory company…

It will cost businesses worldwide 500 billion dollars to “clear the backlog of maintenance” and reach a fully supported current technology environment.

Gartner summarizes the problem best:

“The IT management team is simply never aware of the time scale of the problem. This problem, hidden from sight, is getting bigger every year and more difficult to deal with every year.”

The true danger is that systems get out of date which leads to all kinds of costly software and hardware inefficiencies.

Your tech support provider can most likely do a better job at staying current with your computer and network environment.

Have them start today by documenting the following:

  • The number of applications in use
  • The number purchased
  • The number failed
  • The current and projected costs of both operating and improving their reliability
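A back-of-the-envelope tally of that inventory can make the scale of the debt visible. The sketch below is illustrative only: the application names, statuses, and dollar figures are hypothetical, and “IT debt” is computed simply as the summed cost of bringing every application up to a fully supported state.

```python
# Hypothetical application inventory (names and costs are made up).
# Each entry: (name, status, annual operating cost, cost to bring up to date)
applications = [
    ("payroll",    "in use", 12_000,  8_000),
    ("crm",        "in use", 20_000, 15_000),
    ("legacy-erp", "failed", 30_000, 45_000),
]

in_use = [app for app in applications if app[1] == "in use"]
operating_cost = sum(op for _, status, op, _ in applications if status == "in use")
it_debt = sum(upgrade for _, _, _, upgrade in applications)

print(len(applications), len(in_use))  # 3 2
print(operating_cost)                  # 32000
print(it_debt)                         # 68000
```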

Are you using this powerful formula to control your technology?

There’s a powerful formula I’ll share with you in a moment that will help you adopt new technology faster in your business.

In business, technology encompasses Information Technology (IT), Phone Systems and Web Development.

These three layers of technology form the backbone of your business’s technology environment. Why is technology adoption so important?

Without new technology adoption it’s impossible for businesses to be competitive in this economy. A major role of technology is to help businesses scale, design systems, and automate processes.

Recent studies have shown that adopting technology keeps businesses leaner because entrepreneurs can do more with less.

There’s evidence that new businesses are starting up with nearly half as many workers as they did a decade ago.

For example, the Wall Street Journal’s Angus Loten reported that today’s start-ups are being launched with an average of 4.9 employees, down from 7.5 in the 1990s, according to the Ewing Marion Kauffman Foundation, a Kansas City research group.

In other words, technology allows businesses to expand quickly with less.

Researchers at Brandeis University found that technology-driven service businesses added jobs at a rate of 5.1% from 2001 to 2009, while employment overall dwindled by 0.5%.

These businesses save money, expand, and create jobs by adopting new technologies.

Are you adopting new technologies fast in your business?

Speed of technology adoption is critical to your business success.

Technology is changing the speed of business; now a whole industry might expand, mature, and die in months… not years.

There’s one formula that illustrates this marriage between adopting technology and business success the best… and that’s the “Optimal Technology Equation.”

I recommend you adopt this powerful “Optimal Technology Equation” in your business:

• Maintenance + Planning + Innovation (Adoption) =
• Enhanced Technology Capabilities =
• Reduced Costs + Increased Production =
• Increased Profitability

Of course, this is only a brief explanation of this invaluable formula. Be one step ahead of the competition.


Impact of New Technologies by 2030

According to the 2012 report Global Trends 2030: Alternative Worlds, published by the US National Intelligence Council, four technology arenas will shape global economic, social, and military developments by 2030. They are information technologies, automation and manufacturing technologies, resource technologies, and health technologies.

Information technologies

Three technological developments with an IT focus have the power to change the way we will live, do business and protect ourselves before 2030.

1. Solutions for storing and processing large quantities of data, including “big data”, will provide increased opportunities for governments and commercial organizations to “know” their customers better. The technology is here, but customers may object to the collection of so much data. In any event, these solutions will likely herald a coming economic boom in North America.

2. Social networking technologies help individual users to form online social networks with other users. They are becoming part of the fabric of online existence, as leading services integrate social functions into everything else an individual might do online. Social networks enable useful as well as dangerous communications across diverse user groups and geopolitical boundaries.

3. Smart cities are urban environments that leverage information technology-based solutions to maximize citizens’ economic productivity and quality of life while minimizing resource consumption and environmental degradation.

Automation and manufacturing technologies

As manufacturing has gone global in the last two decades, a global ecosystem of manufacturers, suppliers, and logistics companies has formed. New manufacturing and automation technologies have the potential to change work patterns in both the developed and developing worlds.

1. Robotics is in use today in a range of civil and military applications. Over 1.2 million industrial robots are already in daily operation around the world, and there are increasing applications for non-industrial robots. The US military has thousands of robots on battlefields, home robots vacuum floors and cut lawns, and hospital robots patrol corridors and distribute supplies. Their use will increase in the coming years, and with enhanced cognitive capabilities, robotics could be hugely disruptive to the current global supply chain system and the traditional job allocations along supply chains.

2. 3D printing (additive manufacturing) technologies allow a machine to build an object by adding one layer of material at a time. 3D printing is already in use to make models from plastics in sectors such as consumer products and the automobile and aerospace industries. By 2030, 3D printing could replace some conventional mass production, particularly for short production runs or where mass customization has high value.

3. Autonomous vehicles are today mostly in use in the military and for specific tasks, e.g. in the mining industry. By 2030, autonomous vehicles could transform military operations, conflict resolution, transportation, and geo-prospecting, while simultaneously presenting novel security risks that could be difficult to address. At the consumer level, Google has been testing a driverless car for the past few years.

Resource technologies

Technological advances will be required to accommodate increasing demand for resources owing to global population growth and economic advances in today’s underdeveloped countries. Such advances can affect the food, water and energy nexus by improving agricultural productivity through a broad range of technologies including precision farming and genetically modified crops for food and fuel. New resource technologies can also enhance water management through desalination and irrigation efficiency; and increase the availability of energy through enhanced oil and gas extraction and alternative energy sources such as solar and wind power, and bio-fuels. Widespread communication technologies will make the potential effect of these technologies on the environment, climate and health well known to the increasingly educated populations.

Health technologies

Two sets of health technologies are highlighted below.

1. Disease management will become more effective, more personalized and less costly through such new enabling technologies as diagnostic and pathogen-detection devices. For example, molecular diagnostic devices will provide rapid means of testing for both genetic and pathogenic diseases during surgeries. Readily available genetic testing will hasten disease diagnosis and help physicians decide on the optimal treatment for each patient. Advances in regenerative medicine almost certainly will parallel these developments in diagnostic and treatment protocols. Replacement organs such as kidneys and livers could be developed by 2030. These new disease management technologies will increase the longevity and quality of life of the world’s ageing populations.

2. Human augmentation technologies, ranging from implants, prosthetics, and powered exoskeletons to brain enhancements, could allow civilian and military personnel to work more effectively, and in environments that were previously inaccessible. Elderly people may benefit from powered exoskeletons that assist wearers with simple walking and lifting activities, improving the health and quality of life of aging populations. Progress in human augmentation technologies will likely face moral and ethical challenges.

Conclusion

The US National Intelligence Council report asserts that “a shift in the technological center of gravity from West to East, which has already begun, almost certainly will continue as the flows of companies, ideas, entrepreneurs, and capital from the developed world to the developing markets increase”. I am not convinced that this shift will “almost certainly” happen. While the East, in particular Asia, will likely see the majority of technological applications, the current innovations are taking place mainly in the West. And I don’t think it is a sure bet that the center of gravity for technological innovation will shift to the East.


Impacts of Information Technology on Society in the New Century

In the past few decades there has been a revolution in computing and communications, and all indications are that technological progress and the use of information technology will continue at a rapid pace. Accompanying and supporting the dramatic increases in the power and use of new information technologies has been the declining cost of communications, a result of both technological improvements and increased competition. According to Moore’s law, the processing power of microchips doubles every 18 months. These advances present many significant opportunities but also pose major challenges. Today, innovations in information technology are having wide-ranging effects across numerous domains of society, and policy makers are acting on issues involving economic productivity, intellectual property rights, privacy protection, and affordability of and access to information. Choices made now will have long-lasting consequences, and attention must be paid to their social and economic impacts.
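The Moore’s-law figure quoted above implies a concrete growth rate: a doubling every 18 months compounds to roughly a hundredfold increase over a decade. A quick back-of-the-envelope check:

```python
# Doubling every 18 months: growth factor over t months is 2 ** (t / 18)
months = 10 * 12      # one decade
factor = 2 ** (months / 18)
print(round(factor))  # 102 — roughly a hundredfold per decade
```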

One of the most significant outcomes of the progress of information technology is probably electronic commerce over the Internet, a new way of conducting business. Though only a few years old, it may radically alter economic activities and the social environment. Already, it affects such large sectors as communications, finance and retail trade and might expand to areas such as education and health services. It implies the seamless application of information and communication technology along the entire value chain of a business that is conducted electronically.

Below, we consider the impacts of information technology and electronic commerce on business models, commerce, market structure, the workplace, the labour market, education, private life, and society as a whole.

1. Business Models, Commerce and Market Structure

One important way in which information technology is affecting work is by reducing the importance of distance. In many industries, the geographic distribution of work is changing significantly. For instance, some software firms have found that they can overcome the tight local market for software engineers by sending projects to India or other nations where the wages are much lower. Furthermore, such arrangements can take advantage of the time differences so that critical projects can be worked on nearly around the clock. Firms can outsource their manufacturing to other nations and rely on telecommunications to keep marketing, R&D, and distribution teams in close contact with the manufacturing groups. Thus the technology can enable a finer division of labour among countries, which in turn affects the relative demand for various skills in each nation. The technology enables various types of work and employment to be decoupled from one another. Firms have greater freedom to locate their economic activities, creating greater competition among regions in infrastructure, labour, capital, and other resource markets. It also opens the door for regulatory arbitrage: firms can increasingly choose which tax authority and other regulations apply.

Computers and communication technologies also promote more market-like forms of production and distribution. An infrastructure of computing and communication technology, providing 24-hour access at low cost to almost any kind of price and product information desired by buyers, will reduce the informational barriers to efficient market operation. This infrastructure might also provide the means for effecting real-time transactions and make intermediaries such as sales clerks, stock brokers and travel agents, whose function is to provide an essential information link between buyers and sellers, redundant. Removal of intermediaries would reduce the costs in the production and distribution value chain. The information technologies have facilitated the evolution of enhanced mail order retailing, in which goods can be ordered quickly by using telephones or computer networks and then dispatched by suppliers through integrated transport companies that rely extensively on computers and communication technologies to control their operations. Nonphysical goods, such as software, can be shipped electronically, eliminating the entire transport channel. Payments can be done in new ways. The result is disintermediation throughout the distribution channel, with cost reduction, lower end-consumer prices, and higher profit margins.

The impact of information technology on a firm’s cost structure is best illustrated by the example of electronic commerce. The key areas of cost reduction when carrying out a sale via electronic commerce rather than in a traditional store involve physical establishment, order placement and execution, customer support, storage, inventory carrying, and distribution. Although setting up and maintaining an e-commerce web site might be expensive, it is certainly less expensive to maintain such a storefront than a physical one because it is always open, can be accessed by millions around the globe, and has few variable costs, so that it can scale up to meet demand. By maintaining one ‘store’ instead of several, duplicate inventory costs are eliminated. In addition, e-commerce is very effective at reducing the costs of attracting new customers, because advertising on the Internet is typically cheaper and more targeted than in other media. Moreover, the electronic interface allows e-commerce merchants to check that an order is internally consistent and that the order, receipt, and invoice match. Through e-commerce, firms are able to move much of their customer support online so that customers can access databases or manuals directly. This significantly cuts costs while generally improving the quality of service. E-commerce shops require far fewer, but higher-skilled, employees. E-commerce also permits savings in inventory carrying costs. The faster an input can be ordered and delivered, the less the need for a large inventory. The impact on costs associated with decreased inventories is most pronounced in industries where the product has a limited shelf life (e.g. bananas), is subject to fast technological obsolescence or price declines (e.g. computers), or where there is a rapid flow of new products (e.g. books, music).
Although shipping costs can increase the cost of many products purchased via electronic commerce and add substantially to the final price, distribution costs are significantly reduced for digital products such as financial services, software, and travel, which are important e-commerce segments.

Although electronic commerce causes the disintermediation of some intermediaries, it creates greater dependency on others and also some entirely new intermediary functions. Among the intermediary services that could add costs to e-commerce transactions are advertising, secure online payment, and delivery. The relative ease of becoming an e-commerce merchant and setting up stores results in such a huge number of offerings that consumers can easily be overwhelmed. This increases the importance of using advertising to establish a brand name and thus generate consumer familiarity and trust. For new e-commerce start-ups, this process can be expensive and represents a significant transaction cost. The openness, global reach, and lack of physical cues that are inherent characteristics of e-commerce also make it vulnerable to fraud and thus increase certain costs for e-commerce merchants as compared to traditional stores. New techniques are being developed to protect the use of credit cards in e-commerce transactions, but the need for greater security and user verification leads to increased costs. A key feature of e-commerce is the convenience of having purchases delivered directly. In the case of tangibles, such as books, this incurs delivery costs, which cause prices to rise in most cases, thereby negating many of the savings associated with e-commerce and substantially adding to transaction costs.

With the Internet, e-commerce is rapidly expanding into a fast-moving, open global market with an ever-increasing number of participants. The open and global nature of e-commerce is likely to increase market size and change market structure, both in terms of the number and size of players and the way in which players compete on international markets. Digitized products can cross the border in real time, consumers can shop 24 hours a day, seven days a week, and firms are increasingly faced with international online competition. The Internet is helping to enlarge existing markets by cutting through many of the distribution and marketing barriers that can prevent firms from gaining access to foreign markets. E-commerce lowers information and transaction costs for operating on overseas markets and provides a cheap and efficient way to strengthen customer-supplier relations. It also encourages companies to develop innovative ways of advertising, delivering and supporting their product and services. While e-commerce on the Internet offers the potential for global markets, certain factors, such as language, transport costs, local reputation, as well as differences in the cost and ease of access to networks, attenuate this potential to a greater or lesser extent.

2. Workplace and Labour Market

Computers and communication technologies allow individuals to communicate with one another in ways complementary to traditional face-to-face, telephonic, and written modes. They enable collaborative work involving distributed communities of actors who seldom, if ever, meet physically. These technologies utilize communication infrastructures that are both global and always up, thus enabling 24-hour activity and asynchronous as well as synchronous interactions among individuals, groups, and organizations. Social interaction in organizations will be affected by use of computers and communication technologies. Peer-to-peer relations across department lines will be enhanced through sharing of information and coordination of activities. Interaction between superiors and subordinates will become more tense because of social control issues raised by the use of computerized monitoring systems, but on the other hand, the use of e-mail will lower the barriers to communications across different status levels, resulting in more uninhibited communications between supervisor and subordinates.

The reduction in the importance of distance brought about by computers and communication technology also favours telecommuting, and thus has implications for the residence patterns of citizens. As workers find that they can do most of their work at home rather than in a centralized workplace, the demand for homes in climatically and physically attractive regions would increase. The consequences of such a shift in employment from the suburbs to more remote areas would be profound. Property values would rise in the favoured destinations and fall in the suburbs. Rural, historical, or charming aspects of life and the environment in the newly attractive areas would be threatened. Since most telecommuters would be among the better educated and higher paid, the demand in these areas for high-income and high-status services like gourmet restaurants and clothing boutiques would increase. There would also be an expansion of services of all types, creating and expanding job opportunities for the local population.

By reducing the fixed cost of employment, widespread telecommuting should make it easier for individuals to work on flexible schedules, to work part time, to share jobs, or to hold two or more jobs simultaneously. Since changing employers would not necessarily require changing one’s place of residence, telecommuting should increase job mobility and speed career advancement. This increased flexibility might also reduce job stress and increase job satisfaction. Since job stress is a major factor governing health, there may be additional benefits in the form of reduced health costs and mortality rates. On the other hand, one might also argue that these technologies, by expanding the number of different tasks that are expected of workers and the array of skills needed to perform these tasks, might speed up work and increase the level of stress and time pressure on workers.

A more difficult question to answer concerns the impact that computers and communications might have on employment. The ability of computers and communications to perform routine tasks such as bookkeeping more rapidly than humans leads to concern that people will be replaced by computers and communications. The response to this argument is that even if computers and communications lead to the elimination of some workers, other jobs will be created, particularly for computer professionals, and that growth in output will increase overall employment. It is more likely that computers and communications will lead to changes in the types of workers needed for different occupations rather than to changes in total employment.

A number of industries are affected by electronic commerce. The distribution sector is directly affected, as e-commerce is a way of supplying and delivering goods and services. Other industries, indirectly affected, are those related to information and communication technology (the infrastructure that enables e-commerce), content-related industries (entertainment, software), and transactions-related industries (financial sector, advertising, travel, transport). E-commerce might also create new markets or extend market reach beyond traditional borders. Enlarging the market will have a positive effect on jobs. Another important issue relates to interlinkages among activities affected by e-commerce. Expenditure on e-commerce-related intermediate goods and services will create jobs indirectly, on the basis of the volume of electronic transactions and their effect on prices, costs, and productivity. The convergence of media, telecommunication, and computing technologies is creating a new integrated supply chain for the production and delivery of multimedia and information content. Most of the employment related to e-commerce arises around the content industries and communication infrastructure such as the Internet.

Jobs are both created and destroyed by technology, trade, and organizational change. These processes also underlie changes in the skill composition of employment. Beyond the net employment gains or losses brought about by these factors, it is apparent that workers with different skill levels will be affected differently. E-commerce is certainly driving the demand for IT professionals, but it also requires IT expertise to be coupled with strong business application skills, thereby generating demand for a flexible, multi-skilled work force. There is a growing need for increased integration of Internet front-end applications with enterprise operations, applications, and back-end databases. Many of the IT skill requirements needed for Internet support can be met by low-paid IT workers who can handle basic web page programming. However, wide area networks, competitive web sites, and complex network applications require much more skill than a platform-specific IT job. Since the skills required for e-commerce are rare and in high demand, e-commerce might accelerate the up-skilling trend in many countries by requiring high-skilled computer scientists to replace low-skilled information clerks, cashiers, and market salespersons.

3. Education

Advances in information technology will affect the craft of teaching by complementing rather than eliminating traditional classroom instruction. Indeed, the effective instructor acts in a mixture of roles. In one role, the instructor is a supplier of services to the students, who might be regarded as customers. But the effective instructor occupies another role as well, as a supervisor of students, and plays a role in motivating, encouraging, evaluating, and developing students. For any topic there will always be a small percentage of students with the necessary background, motivation, and self-discipline to learn from self-paced workbooks or computer-assisted instruction. For the majority of students, however, the presence of a live instructor will continue to be far more effective than a computer-assisted counterpart in facilitating positive educational outcomes. The greatest potential for new information technology lies in improving the productivity of time spent outside the classroom. Making solutions to problem sets and assigned reading materials available on the Internet offers a lot of convenience. E-mail vastly simplifies communication between students and faculty and among students who may be engaged in group projects.

Although distance learning has existed for some time, the Internet makes possible a large expansion in coverage and better delivery of instruction. Text can be combined with audio/video, and students can interact in real time via e-mail and discussion groups. Such technical improvements coincide with a general demand for retraining by those who, due to work and family demands, cannot attend traditional courses. Distance learning via the Internet is likely to complement existing schools for children and university students, but it could have more of a substitution effect for continuing education programmes. For some degree programmes, high-prestige institutions could use their reputation to attract students who would otherwise attend a local facility. Owing to the Internet’s ease of access and convenience for distance learning, overall demand for such programmes will probably expand, leading to growth in this segment of e-commerce.

As shown in the previous section, high level skills are vital in a technology-based and knowledge intensive economy. Changes associated with rapid technological advances in industry have made continual upgrading of professional skills an economic necessity. The goal of lifelong learning can only be accomplished by reinforcing and adapting existing systems of learning, both in public and private sectors. The demand for education and training concerns the full range of modern technology. Information technologies are uniquely capable of providing ways to meet this demand. Online training via the Internet ranges from accessing self-study courses to complete electronic classrooms. These computer-based training programmes provide flexibility in skills acquisition and are more affordable and relevant than more traditional seminars and courses.

4. Private Life and Society

Increasing representation of a wide variety of content in digital form results in easier and cheaper duplication and distribution of information. This has a mixed effect on the provision of content. On the one hand, content can be distributed at a lower unit cost. On the other hand, distribution of content outside of channels that respect intellectual property rights can reduce the incentives of creators and distributors to produce and make content available in the first place. Information technology raises a host of questions about intellectual property protection and new tools and regulations have to be developed in order to solve this problem.

Many issues also surround free speech and regulation of content on the Internet, and there continue to be calls for mechanisms to control objectionable content. However, it is very difficult to find a sensible solution. Dealing with indecent material involves understanding not only the views on such topics but also their evolution over time. Furthermore, the same technology that allows content to be filtered for decency can be used to filter political speech and to restrict access to political material. Thus, if censorship does not appear to be an option, a possible solution might be labelling. The idea is that consumers will be better informed in their decisions to avoid objectionable content.

The rapid increase in computing and communications power has raised considerable concern about privacy in both the public and private sectors. Decreases in the cost of data storage and information processing make it likely that it will become practicable for both government and private data-mining enterprises to collect detailed dossiers on all citizens. Nobody knows who currently collects data about individuals, how this data is used and shared, or how this data might be misused. These concerns lower consumers’ trust in online institutions and communication and, thus, inhibit the development of electronic commerce. A technological approach to protecting privacy might be cryptography, although it might be claimed that cryptography presents a serious barrier to criminal investigations.

It is popular wisdom that people today suffer from information overload. A lot of the information available on the Internet is incomplete and even incorrect. People spend more and more of their time absorbing irrelevant information just because it is available and they think they should know about it. Therefore, we must study how people assign credibility to the information they collect, in order to invent and develop new credibility systems that help consumers manage the information overload.

Technological progress inevitably creates dependence on technology. Indeed the creation of vital infrastructure ensures dependence on that infrastructure. As surely as the world is now dependent on its transport, telephone, and other infrastructures, it will be dependent on the emerging information infrastructure. Dependence on technology can bring risks. Failures in the technological infrastructure can cause the collapse of economic and social functionality. Blackouts of long-distance telephone service, credit data systems, and electronic funds transfer systems, and other such vital communications and information processing services would undoubtedly cause widespread economic disruption. However, it is probably impossible to avoid technological dependence. Therefore, what must be considered is the exposure brought from dependence on technologies with a recognizable probability of failure, no workable substitute at hand, and high costs as a result of failure.

The ongoing computing and communications revolution has numerous economic and social impacts on modern society and requires serious social science investigation in order to manage its risks and dangers. Such work would be valuable for both social policy and technology design. Decisions have to be taken carefully. Many choices being made now will be costly or difficult to modify in the future.


© 2019 Digital Technology
