The Role of Information Technology in Modernising the Courts

Written By all909 on Sunday, 8 September 2013 | 22:31

Introduction

It is a singular honour and privilege for me to be offered this opportunity to address so distinguished an assembly of the most eminent legal minds in Southern, Central and East Africa, on a subject so relevant to the courts of the region as they strive to serve their communities in a world undergoing immense technological change, while our region continues to suffer an absence of adequate resources. Courts are being called upon to deliver services against a backdrop of an increasing caseload on a declining resource base.

Your lordships, honourables, ladies and gentlemen, you have the unenviable task of leading the judiciaries of the region during this period of both immense challenges and ground-breaking opportunities. It is my hope that this morning we shall explore together the opportunities and challenges created by a new age, the information age.

Information Technology

We are halfway through the first decade of the twenty-first century; we have well and truly entered the new millennium. At the same time, the Industrial Revolution that began some three centuries ago has given way, for the industrialised world, to the Information Age. The birth of the information age is as earth-shaking as the Industrial Revolution in terms of how we work and how we transmit, store and retrieve information. And yet it appears still to be in its infancy! Jean-Francois Rischard put it this way:

 "…the plummeting costs of communicating and computing present enormous opportunities for developing and developed countries alike, to do things, cheaper, differently. This is the heart of the information revolution, a tectonic shift that differs from previous economic breakpoints because it is not about transforming energy or matter, but about manipulating, transporting and storing information and knowledge."[2]



Martin Bangemann has stated:

 “Throughout the world, information and communication technologies are generating a new industrial revolution already as significant and far-reaching as those of the past. It is a revolution, itself the expression of human knowledge. Technological progress now enables us to process, store, retrieve and communicate information in whatever form it may take, unconstrained by distance, time and volume. This revolution adds huge new capacities to human intelligence and constitutes a resource which changes the way we work together and the way we live together."[3]

What does this information age involve? The information age revolves around the advances so far made in telecommunications and information technology. These consist of hardware, software, and media for the collection, storage, processing, transmission and presentation of information. We are talking of communication and computing equipment and programmes, including satellites, switches (telephone exchanges), transmission lines, computers, modems, operating systems and applications.

Of what relevance is this revolution to a judicial system? If the way people work, live and play is changing, this will no doubt affect the administration of justice, which is part of this changing world. The Judiciary ought to take advantage of new developments that may enhance the delivery of its own services.

The changes that come with the availability of information to all, or rather its potential availability to all, will no doubt affect how part of our population relates to the courts. At the same time, since information technology serves people, issues arising from the new ways of working, living and playing will become matters for those involved in the administration of justice to deal with, as crime and civil disputes take new forms. As noted by Natalia Schiffrin,

 "But while the internet enhances freedom of expression by allowing for free and effectively unregulated communication, it has also facilitated a great deal of crime. The dissemination of child pornography, not to mention fraud, gambling, blackmail, and cyber stalking are all on the rise. Even incitement to murder is occurring over the internet…."[4]

It is the duty of any judicial system to prepare for and meet these challenges. At the same time it is the duty of the Judiciary to take advantage of the new opportunities offered by information technology to offer a professionally excellent service to the community. Nothing less is expected of us.

There is, however, a word of caution from the very outset for societies that fall on the disadvantaged side of the digital divide, the 'information have-not' societies, in which the vast majority of the community lives outside of this information revolution, somewhat akin to living on the fringes of industrial society. In societies where the penetration of electricity and the telephone is less than ten percent of the population, the majority who lack access to such basic amenities clearly face formidable obstacles to accessing information-age developments. For just as there is an information divide between nations, so is there a divide within information-disadvantaged societies, between a very tiny class of those who have access and the majority who have little hope of it.

Modernising the Judiciary

The main business of the judiciary is to hear and determine cases in a fair and timely manner at reasonable cost. In doing so there are processes that lead to the conclusion of the cases before the courts. These processes must be efficient, effective, and equitable.

The processes must be efficient in the sense that they provide value for money. The resources employed must be utilised in a non-wasteful manner, leading to their optimal allocation and utilisation. The system cannot be engaged in an abstract search for the truth alone, oblivious of all other factors such as cost, efficacy and equity. The modern approach calls for a balancing of the various objectives of the justice system, given the scarcity of resources and the competing demands on the limited resource envelope available, particularly in resource-strapped societies such as those in our region.

Secondly, the processes must be effective in the sense that they are able to achieve that which is sought. For instance, is the system able to ensure accountability for wrongs committed against society? Is the relief sought and obtained able to compensate for the injury complained of? Going to court is not simply an academic exercise, though in some instances the nature of the matter at hand may be somewhat academic, yet nevertheless necessary to address.

The processes must be equitable in that all those who ought to have access to the justice system, and who seek it, do obtain it. The process must not lock out sections of the community. Neither should it be discriminatory, nor show partiality to a class of litigants or to some areas of subject matter.

How, then, does IT enable the courts to be modern, that is, efficient, effective and equitable?

IT can be a useful tool in the following areas: (1) text creation, storage and retrieval; (2) improved access to the law; (3) recording of court proceedings; (4) case management and the production of data for administrative purposes; (5) continuing education; and (6) communication.

Text Creation, Storage and Retrieval

Apart from the hearing function, judges have to produce written judgements, rulings, and reasons for the decisions that they continuously make. Even after the advent of the typewriter, the judge often wrote decisions in longhand, and secretaries or typists would then type them out. It is now possible for the judge to type a decision directly on the computer, and there are many reasons why the judge should be familiar with word-processing skills. A judge is able to produce a decision much faster that way. Because of the ability to manipulate documents through copy, cut and paste, to work from templates, or to use macros, it is now much easier to produce a document with the information you want included in it. On the same computer or other storage medium it is possible to store the document and retrieve it very fast, or to call up other documents, without having to move from your workstation. As a result, judgements, decisions and rulings can be produced much faster in final form for release to the parties. Simultaneously, those decisions can go into a court-system database to which judges and other people may have access should they need them for whatever purpose. IT definitely makes the production and release of decisions much more efficient than was previously the case.

Most of the documents in our case files, be they from advocates or the court, are generated on computers. This means that copies are available electronically, as they are produced digitally. Even where documents have been produced manually and only hard copies are available, it is possible to scan them and convert them into digital format. This creates an opportunity to create and maintain an electronic copy of the case file, which would eliminate the problem of loss of the physical file that plagued our courts in Uganda for quite some time in the past. The courts have the capacity to acquire the necessary hardware for this purpose. If an electronic version of the court file were maintained, it would speed up, and reduce the cost of, preparing a record for appeal purposes, thus eliminating one of the bottlenecks to the speedy delivery of justice.
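By way of illustration only, the electronic case file described above could be sketched as a simple index of documents per case. Everything in this sketch, the case-number format, the document titles and the file paths, is invented for the example and does not describe any system actually deployed in the courts.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Document:
    title: str
    filed_on: date
    path: str  # location of the scanned or born-digital copy

@dataclass
class CaseFile:
    case_number: str
    documents: list[Document] = field(default_factory=list)

    def add(self, doc: Document) -> None:
        self.documents.append(doc)

    def record_for_appeal(self) -> list[str]:
        # Assemble the appeal record: document titles in filing order.
        return [d.title for d in sorted(self.documents, key=lambda d: d.filed_on)]

# A registry keyed by case number stands in for the physical file store.
registry: dict[str, CaseFile] = {}

cf = CaseFile("HCT-00-CV-2005-001")  # hypothetical case number
cf.add(Document("Plaint", date(2005, 1, 10), "scans/plaint.pdf"))
cf.add(Document("Written Statement of Defence", date(2005, 2, 3), "scans/wsd.pdf"))
registry[cf.case_number] = cf
```

With such an index, assembling the record for appeal is a lookup and a sort rather than a search for a missing physical file.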

Improved Access to the Law

In many jurisdictions the applicable law is found in different sources. These include statute books for legislation, law reports for case law, and oral tradition for customary law. Prior to the advent of current information technology, legislation and case law were available only in hard copy, in book form or in print or typescript. The traditional approach in some jurisdictions was to produce regularly an up-to-date version in the form of an edition of the laws in force at a particular time. In Uganda in particular, the situation got out of hand, with the 1964 Edition of the Laws of Uganda remaining unrevised until recently; even then the revision so far is partial, limited to the principal legislation only. The edition went out of print and was out of date in significant part, so determining the applicable law was often quite cumbersome. Law reporting collapsed thirty years ago, and efforts to revive it are ongoing, without success to date.

It is now possible to keep both legislation and law reports, not only in hard copy and book form, but also in digital format, on CDs and other storage media, online (Internet/intranet), or on stand-alone machines, making it much easier for a judge or member of the public to search for and obtain the provision of the law or previous court decision that one desires. With the use of the Internet, it is possible to search for and obtain comparative and persuasive jurisprudence from other jurisdictions while seated at one's workstation.

What makes the situation even more promising is that document production is now digital, making it easy to copy and distribute information at very little cost. It is therefore now possible for the law to be available in an easier, more convenient and more accessible format. This makes it simpler to research and to incorporate the results of that research into new documents. IT has the potential to improve access to the law tremendously, raising the productivity of its consumers, and possibly both the quality and quantity of what they produce, thus increasing both the efficiency and the efficacy of their product.
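The kind of digital search described above can be sketched in a few lines: match every term of a query against a small corpus of legal texts. The statute references, the case name and all the quoted texts below are placeholders invented for illustration, not the actual provisions or authorities.

```python
# Placeholder corpus: references and texts are invented for illustration.
statutes = {
    "Evidence Act, s.58": "Oral evidence must, in all cases whatever, be direct.",
    "Contracts Act, s.10": "All agreements are contracts if made by the free consent of the parties.",
}
case_law = {
    "Mukasa v. Attorney General (hypothetical)": "The court held that oral evidence of the agreement was admissible.",
}

def search(corpus: dict[str, str], query: str) -> list[str]:
    """Return the references whose text contains every term in the query."""
    terms = query.lower().split()
    return [ref for ref, text in corpus.items()
            if all(term in text.lower() for term in terms)]

# Once both legislation and case law are digital, one search covers them all.
hits = search({**statutes, **case_law}, "oral evidence")
```

A real system would add indexing and ranking, but even this naive scan shows why a digital corpus beats paging through an out-of-print edition.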

It may be noted that the judges of the Supreme Court, Court of Appeal, and the High Court in Kampala, or at least the majority of them, do have computers and are connected to the internet. A number are known to make use of the internet for electronic legal research. A significant number, however, are known never to switch these computers on!

Recording of Court Proceedings

For a long time, here in Uganda and elsewhere, court proceedings were recorded in longhand by the judge or magistrate. In some jurisdictions court reporters recorded proceedings in shorthand using stenographic machines, and later produced a record of the proceedings. In others, recording was by way of tape recorders capturing voice, the record later being transcribed into a typed record. There have been new developments. Voice-recognition technologies are being tested but are yet to be perfected. It is now possible to make digital audio recordings of voice on the computer, giving the judge the capacity to annotate the record and to listen later to whatever portion he may want. The record so made would have to be transcribed into hard copy (for as long as a hard-copy file is maintained), with e-versions available too. It is also possible to have instantaneous recording of proceedings by a court reporter, which the judge and counsel can view at their respective desks as the proceedings continue. The advantage of the digital format is that it is easy to manipulate, whether it is text, voice (sound) or images.

With the use of IT the pace of proceedings may be sped up considerably. The quality of the record is enhanced immensely, as it is far more accurate. Cases ought to be resolved faster, both at trial and on appeal, as a result of the easy availability of the trial record. With judges freed from the task of recording proceedings, they can pay more attention to the function for which they are hired: judging.

Case Management

Computing has greatly enhanced our capacity to capture, study and manipulate data, producing reports and other records that one might be interested in. Using programmes developed for the purpose, it is possible to track events and cases with a view to availing the decision maker of information in a timely manner, and in considerably less time than if the same were done manually. Equipped with this information, the decision maker can take appropriate action: move a case forward, assign it, list it for trial, or take whatever other action is appropriate. One is able to follow both the large picture, in terms of the aggregate of cases, and the small picture, in terms of a single case. The production of forms and other repetitive processes can be automated. In Uganda this has been embraced with the development of CCAS (Computerised Case Administration System) and MIS (Management Information System).
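The large-picture and small-picture tracking described above could, in bare outline, look like the following. The case numbers, filing dates and the one-year limit are invented for the example and are not drawn from CCAS or MIS themselves.

```python
from datetime import date

# Hypothetical pending-case register: (case number, date filed).
pending = [
    ("CR-2004-112", date(2004, 3, 1)),
    ("CV-2005-007", date(2005, 6, 15)),
]

def overdue(cases, today, limit_days=365):
    """The small picture: flag individual cases pending beyond the limit."""
    return [num for num, filed in cases if (today - filed).days > limit_days]

def caseload_summary(cases, today):
    """The large picture: aggregate figures for the decision maker."""
    return {"pending": len(cases), "overdue": len(overdue(cases, today))}
```

Run daily against the register, such a report tells the administrator which individual cases need listing and how the overall backlog is moving, the two views a case-administration system is meant to provide.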

Communication

It is both in the interests of the Judiciary and in the public interest that the public gets to know and understand what is going on in the Judiciary in relation to its mandate. The public ought to know what problems the judiciary is having and what it is doing to tackle them, and what the judiciary is doing with the resources entrusted to it in carrying out its mandate. The judiciary does not often have the same platforms as other organs of government. It does not control the purse strings of government or the coercive machinery of government in the manner that the legislature and executive do. The authority of the judiciary ultimately rests on the confidence the public has in the services it offers. It is therefore important that the judiciary is able to communicate with the public. One of the easier means of doing so is to go online with the requisite information about activities, problems, and the solutions adopted to tackle them, in the form of timely reports and updates. Because of our people's limited access to online resources, the audience may be limited. Nevertheless, because that information can be reused by media houses and others, it may still reach a wider audience than initially anticipated.

In this regard it should be noted that the Judiciary in Uganda has a website at http://www.judicature.go.ug . Unfortunately it is static most of the time and not fully developed. For the last three years it has not been seriously attended to, though, as a rare exception, I must point out that in that period the Judiciary has on different occasions temporarily posted three decisions of the Constitutional Court, and of the Supreme Court on appeal from the Constitutional Court, that were of immense public interest.

The information the web page purports to deliver is often not there. For instance, it has a cause-list section, but this has been mostly blank, at least on all the occasions I have checked. This only frustrates the intended recipients of the information and does nothing to add to public confidence in the judiciary.

IT affords the courts not only an opportunity to communicate with the public through the internet, but also an opportunity for internal communication within the organisation through intranets and electronic mail. Information to which the public is not privy could be kept on intranets accessible only to the relevant categories of officers. At the same time, paperless communication by email is possible between judges and other judicial personnel, inside and outside the judiciary, at very little cost and almost instantaneously. All over the world there exist email lists for judges and other professionals, on which judges are able to share information of a professional, or merely recreational, nature.

The judiciary in Uganda does have email servers and programmes installed for the courts with internet connections. Unfortunately no advantage has been taken of them to encourage intra-organisational communication. Of course some officers use free email services on the World Wide Web, but this is the result of individual initiative rather than organisational arrangements.





Training

As a tool for training, there are several computer-based modules that can assist you to develop your computer-related skills to functional levels, including word processing, typing, use of the internet, and many others. Training modules are available on floppy disks, CDs, and via the internet. This form of training is convenient because you can take it at your own pace, at a time of your choosing, and the module may be available whenever you need to consult it. It is also possible to pursue continuing professional, academic or other programmes through internet-based distance education.

Pitfalls in Acquisition and Deployment of IT

Having extolled the virtues of the adoption and deployment of IT, it is important to mention some factors critical to the success of its adoption. IT acquisition is not an end in itself; IT is a tool. The process of acquiring this rather sophisticated tool is quite important, as it will determine whether the acquisition meets the goals set and delivers the intended benefits.

Research in the US has established that there is a significant failure rate in IT projects in both the private and public sectors. Many reasons are advanced for this failure, including:

    Lack of top management commitment
    Inadequate planning
    Abandoning the project plan
    Inadequate user input
    Inexperienced project managers
    Flawed technical approach
    Anticipating advances in technology
    Failure to satisfy user needs
    Inadequate documentation
    Unwieldy procurement processes
    Burdensome oversight reviews
    Unrealistic cost estimates
    Imprecise specifications
    Non-compliance by vendors[5]

Uganda’s Experience

IT acquisition in the judiciary in Uganda started with the sporadic purchase of computers for word processing by secretaries. Then, in 1994 or thereabouts, came the law reporting project (the Justice Porter project), under which 15 computers were bought (with donor support) to assist in the production of case digests. The project did not survive long after the departure of the person who ran it.

The largest IT project in Uganda has been the Computerised Case Administration System (CCAS), which was to be followed by, and eventually subsumed into, the Management Information System (MIS). The story of CCAS is a long one, and if, as an institution, we are able to learn from the mistakes suffered in implementing this project, newer IT projects will have a much higher chance of success within the periods planned. MIS has not come into operation, much as it had been planned to be operational by 2003. It is not known when MIS will now come into operation.

In 1999, the Chief Justice appointed a Technology Committee responsible for advising the judiciary on IT matters and for drawing up an automation and technology plan for the adoption of IT in the judiciary. With the help of consultants, the first IT strategy, covering the years 2000 to 2005, was adopted in 2000. A second plan has been adopted for the years 2005 to 2008.

The judiciary in Uganda produced The Strategic Plan for the Uganda Judiciary 2002/3 to 2006/07, intended to be the blueprint for the realisation of the vision and mission of the judiciary. The Strategic Plan makes no mention of the first or second IT plan/strategy, either to acknowledge their existence or to incorporate them into the main plan. Is this a strategic omission? The Strategic Plan gives some mention to CCAS and MIS, overlooking the other sub-components of the first and second IT plans. By ignoring the IT strategy as a whole, the Strategic Plan may create the impression that those responsible for drawing up and approving it have no commitment to implementing the IT strategy for the judiciary, and are concerned only with CCAS and MIS.

This fascination with CCAS and MIS, to the exclusion of other IT projects, is historical and deserves a separate study of its own. Nevertheless, for the purposes of this paper it is important to note that the emphasis on CCAS (producing data for decision making by senior or top management) probably reflects the importance attached to CCAS by (1) the major donor supporting the project and the judiciary; (2) the consultants hired to design, supply and install it; and (3) top management in the judiciary, in comparison to other IT tools necessary to raise the productivity of the judge or magistrate who hears and determines the cases that CCAS tracks. The consultants retained to develop CCAS and MIS are the same ones that were retained to produce the Strategic Plan for the Judiciary. Familiarity with the former led to their exclusive treatment in the latter, with no ostensible intervention from the judiciary to correct the anomaly. This raises questions about user input into the Strategic Plan, and about the commitment of the judiciary to the IT strategic plans it developed, or, simply put, its commitment to the use of IT, automation and innovation.

One of the most recent projects the judiciary has undertaken is a pilot court recording project, with the provision of analogue audio recording systems for several magistrates' courts located in the different regions of the country, as well as the Court of Appeal and the Supreme Court, and a digital recording system for two selected High Court courtrooms, in Kampala and Jinja. These projects were implemented in 2003 and 2004. The evaluation reports in respect of this project are very instructive.

“The Court of Appeal received equipment late compared to the other courts. However, the initial attempt to operate the equipment ended in failure as the person who was trained was not deployed to carry out recording. Instead, the trainee who failed the test in the first batch of training was deployed to carry out the recording.

Problems encountered.

·        The Operator carried out recording with the recorder speaker turned on. The recording therefore carried echoes, which made the job of transcribing impossible.

·        The operator also mishandled microphones leading to breaking of one of the signal pins on one microphone.

Solutions offered

·        The right operator was deployed to carry out the recording function effective the date of commissioning and she is performing fine.

·        The broken microphone has been returned to our workshop for repair.”

“3.3 Kampala High Court

This is one of the pilots using digital recording equipment.

3.3.1 Problems Encountered

·        Initially, there was a lot of interference in the recorded sound.

·        There is need to customise the recording software interface to match the court system in Uganda

3.3.2 Solution Offered

·        The interference was traced to poor or no earth wiring of the electrical supply in the court building. A proper earth wiring was implemented and modifications done to the mixer to improve sound quality.

·        We are still awaiting customisation details from the court for us to implement the changes.”

“3.4 Jinja High Court

This is the second pilot using digital recording equipment.

3.4.1 Problems Encountered

·        No problems have been reported in this court. However the resident judge whose court uses the equipment is mainly out of station covering cases in Mukono. So far 4 successful recordings have been made.

·        There is need to customise the recording software interface to match the court system in Uganda as is the case for Kampala.”[6]  (Emphasis is mine.)

The consultants make some general observations which are relevant to the success of the project.

“4.1 General Observations

Our monitoring team has established that where there is administrative will, the pilot is showing positive results. In some sites there is general laxity typical of the civil service in the country.

There is danger that for those sites that have not received their equipment, the trained personnel will soon forget what they learnt and this can frustrate the project.

In addition, there is the danger of transferring trained personnel to other offices, contrary to what was emphasized during selection of trainees. If this is not stopped the project will definitely fail due to lack of trained manpower.”

“4.3 Administrative Issues

The absence of enthusiasm in some pilot sites should be addressed. Senior staff in the judiciary should pay visits to the pilot sites and emphasise the seriousness of the pilot project in the future capacity of the courts in delivering justice.

Personnel who received training in court recording and transcribing should be left to work at their allocated sites, at least for the pilot stage in order that the project gets a fair evaluation.”[7] (Emphasis is mine.)

Six months later the consultants issued the second and last evaluation and monitoring report on the project in respect of those sites where the court recording was already installed. Again the comments are quite instructive.

“3.3 Kampala High Court

This is one of the pilots using digital recording equipment. There has been very little progress at this court.

3.3.1 Problems encountered

·        The supplier has not yet attended to the problem of customising the software to fit the court requirements.

·        In addition, the UPS serving the recording computer failed and has been returned to the supplier for repair.

3.3.2 Solution suggested

·        The suppliers of the digital recording software should complete the customisation.

·        The suppliers of the UPS should expedite repair work or replace the UPS under warranty.

3.4 Jinja High Court

This is the second pilot using digital recording equipment. There is no progress at this court.

3.4.1 Problems encountered

·        The staff member who was responsible for this project left the Judiciary and the equipment has not been handed to another person.

3.4.2 Solution suggested

Recruit and train a new person or deploy excess staff from Kampala High Court.”[8]

From the foregoing it is evident that many of the problems identified in the US research on the reasons for failed IT projects are reflected in the problems encountered in implementing the Pilot Court Recording Project. These include lack of management commitment to the project, especially in the High Court; inadequate planning, especially with regard to staff deployment; inadequate user input into the requirements of the software interface; inexperienced project managers; failure to satisfy user needs; imprecise specifications; and non-compliance by vendors.

There are other problems that plague the IT sector in the judiciary, one of the most significant of which is staffing. In the traditional establishment for the Judiciary there was, as was to be expected, no provision for IT staff, as the sector only emerged quite recently. When the judiciary started on IT projects it still had no provision for IT staff. Eventually provision was made for three staff positions on the establishment, at the top end of the ladder. Despite some effort to expand this number to seven positions, the minimum necessary given the extensive outlay of IT equipment and services, the judiciary has failed to obtain the approval of the Ministry of Public Service, which is in charge of this function. The result is that the judiciary has an IT infrastructure and services without the staff required to maintain and run them, a very unsatisfactory state of affairs. For instance, staff with no relevant qualifications or skills whatsoever have been appointed as System Administrators. This situation is intolerable. End users do not have the support that they ought to have.

Conclusion

I have endeavoured to show that information technology is now a tool essential for the modernisation of a judiciary or judicial system. But it is only a tool, and if not handled with skill and commitment it may instead frustrate efforts at modernisation. The process of adopting IT is as important as, or probably even more important than, the mere purchase and installation of IT hardware and software. If the process is flawed, it is unlikely that the expected benefits will flow from the IT acquired. It could easily turn out to be a waste of scarce resources, with equipment left to gather dust as its life comes to an end, for IT equipment has a short lifespan in terms of obsolescence.

Information technology creates both opportunities and challenges. These opportunities and challenges need to be fully grasped, and mastered, if the institutions that you lead are to take full benefit of what information technology offers.

I thank you for listening to me.

Social Gaming Startup Socialspiel Recruits Another Ex-Rockstar Games Veteran As CEO, Raises Further €200K

Written By all909 on Friday, 30 August 2013 | 01:18



Socialspiel, the Austrian social games startup founded by ex-Rockstar Games employees, has announced a small additional funding round, and with it a change of guard at the top of the company. The new funding — €200,000 — is led by German angel fund FLOOR13, with participation from games industry “veterans” Clemens Beer and Mike Borras. It brings the total raised by the 2010-founded company to a little under €500,000.
Of more significance, however, is that Borras has been recruited as CEO, taking over from co-founder Helmut Hutterer who transitions to the role of COO. Prior to his new role heading up Socialspiel, Borras was co-founder of local search startup Tupalo and, unsurprisingly, before that he was at Rockstar Games. The Rockstar Games legacy doesn’t end there, either. New investor Beer was previously a lead developer at Rockstar, and is also currently CEO of Tupalo.
Both Borras and Beer will join the board of Socialspiel, while the new funding will be used to grow the company’s development teams — headcount is expected to reach 15-20 employees by Spring of 2014, up from the current 11 — and to support Socialspiel’s hybrid business model of developing its own IP/games, and working with third-party IP/brands to help them jump on the social/mobile gaming bandwagon. I also understand that the startup became cash flow positive in 2012, and is on track to make a profit in 2013.
Socialspiel’s two-pronged approach of social games co-development, in addition to self-publishing, makes a lot of sense for a relatively small player and in light of the fickle nature of casual gaming and the plight of the Zyngas of this world. By working with established brands/IP, who traditionally fund the upfront cost of development, the risks are somewhat mitigated in what can be a very hits-driven market. Brand/marketing-led titles are also arguably a better fit for the free-to-play social, mobile/tablet gaming market.
It’s thus partnered with major IP owners/rights-holders such as Deutsche Telekom, Les Éditions Albert René, Sproing, and SEE Games, etc., in addition to major brands and advertising agencies including Bartle Bogle Hegarty, and Chupa Chups. “This is a more long-term partnership better suited to these style of games, compared to the short-term nature of the traditional ‘work for hire’ model in the PC/Console games industry,” new Socialspiel CEO Borras tells TechCrunch.
Under this arrangement, Socialspiel fulfills the role of a full-stack production studio. “From pre-production to production to live game, we design, develop, maintain the games, and we also live analyze and tune economy/player metrics,” says Borras. Meanwhile the companies/brands it partners with manage the IP, licensing, merchandising, publishing, distribution, marketing, and player acquisition.
In addition, Socialspiel also self-publishes its own titles. “We develop our own technology and IP in the form of free to play social web, mobile, or tablet games,” adds Borras. “We primarily self-publish these titles, though we have taken our own IP and brought those into our co-development partnerships in the past. Tight Lines Fishing is a perfect example of this, which we’re planning on re-launching with a major co-development partner later in 2014.”
Socialspiel’s competitors include Spooky Cool Labs, the 40-person team recently acquired by Zynga that has developed games around the Wizard of Oz IP.
“For a small startup like ours partnering with major brands who have rich IP catalogs allows us to focus on what we truly do best, which is design and develop award-winning free to play social games, while having a strong partner with massive marketing reach and brand-recognition,” says Borras. “Together we can launch and scale instantly recognizable games in North America and Europe which are both engaging and profitable.”

Hugo: Firefox’s most advanced “search all tabs” extension

Written By all909 on Wednesday, 21 August 2013 | 23:09

Sometimes when you are doing research, you may want to find a specific term or phrase on all or most pages that you have open in the web browser. That's something you cannot do in Firefox by default. What you can do is search each page one by one until you have searched them all. While possible, that is hardly practical, especially when dozens of pages or more need to be searched.
I reviewed FindBar Tweak in July, which offered an option to search all open tabs in Firefox, and while that worked rather well, the implementation had its shortcomings, such as no option to jump straight to a result.
Hugo is a new extension that improves "search in all tabs" significantly. The extension integrates well into Firefox, and you can access it with a click on the Hugo link in the Find Bar when it is open in the browser.
When you do, a sidebar opens in Firefox that lists all occurrences of the selected phrase in all open tabs of the browser. The scan may take a couple of seconds or more, depending on a number of factors, including the number of tabs open in the browser.
You can right-click the sidebar to move the results to the bottom, which may work better for you depending on the window size of the browser.
The extension separates results by tab, and displays up to 250 words of context for each search result. You can modify the word count with a click on 250 in the interface, so that it can display between 20 and 2000 words of context for each result. The developer notes that a context increase may slow down the rendering significantly.
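The idea of a configurable context window around each hit can be sketched roughly as follows. This is a minimal Python illustration of the technique, not the extension's actual code; the function name and its exact word-splitting behaviour are assumptions for illustration only.

```python
import re

def context_snippet(text, phrase, words=20):
    """Return roughly `words` words of context around the first
    case-insensitive occurrence of `phrase`, or None if not found."""
    m = re.search(re.escape(phrase), text, re.IGNORECASE)
    if m is None:
        return None
    before = text[:m.start()].split()   # words preceding the match
    after = text[m.end():].split()      # words following the match
    half = words // 2
    return " ".join(before[-half:] + [m.group(0)] + after[:half])
```

As the article notes for Hugo itself, widening the window (more words of context) means more text to collect and render per result, which is why a large context setting slows things down.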
A double-click on a result jumps straight to it. If the tab is not active, it will be made the active tab. You can alternatively click on the page title to jump to it as well.
That's not all, though. You can change the listing of results to titles only. This displays just the titles of pages on which at least one occurrence of the phrase has been found, without the results in context.
Hugo ships with a set of filters that enable you to search only select tabs and to add domain names to the ignore list. The ignore list comes with several entries, including Bing, Yahoo, Google and Wikipedia, and you can add custom domain filters as well. The filters are not selected by default and need to be activated before you can use them.
The tabs filter works similarly, but on individual tabs: you can block select tabs from being searched, or select the tabs that you want included.
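The domain ignore list described above boils down to a simple host check before a tab is scanned. Here is a hedged sketch of that logic in Python, assuming (this is not Hugo's code) that an ignored domain should also exclude its subdomains:

```python
from urllib.parse import urlparse

def should_search(url, ignore_domains):
    """Decide whether a tab's URL should be included in the scan.
    A URL is skipped when its host is an ignored domain or a
    subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d)
                   for d in ignore_domains)
```

With `{"google.com"}` on the ignore list, both `google.com` and `www.google.com` tabs would be skipped, while other sites remain searchable.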


Another set of filters is listed in the middle of the main toolbar. Here you can switch to display results found on the current tab, display results only on domains that are not on the ignore list, or display an inverted domain list (domains that do not contain the phrase).
The extension ships with a speed search keyboard shortcut - Alt-9 - that initiates search for text that you have highlighted on the active website. The option needs to be enabled before the shortcut becomes available.
If you notice performance drops during scans, you may want to throttle the rendering of the results listing in the options as well.

Verdict

Hugo is an excellent extension for Firefox users who use the browser for research. It does not really matter what kind of research; it works well for all kinds. The filtering options help you limit the search to reduce the time it takes to display the list of results, and to avoid results from pages or domains that you do not want included in the search.

Facebook Video Ads: What To Expect


If you complain that your Facebook news feed is too cluttered as it is, brace yourself: According to reports, the social network plans to sell TV-style ads that will appear in your stream alongside posts from your friends.
Rumors of Facebook video ads first surfaced in December when one report said they were set to roll out by "April at the latest." Subsequent reports had the launch date pushed back to mid-October. Facebook has declined to comment.
But one thing is certain: Facebook users hate change, and this new addition to the news feed will likely upset many. But for the social network, which now boasts 1.15 billion users, video ads will be lucrative. Brands can expect to spend between $1 million and $2.5 million a day.
[Do you know what to look for? Read more: How To Spot A Facebook Scam.]
Facebook needs to tread carefully with its rollout of video ads, keeping the user in mind. Here's a look at what we know about the upcoming addition to news feeds, and what Facebook can do to ease the transition.
1. Ads Will Be Brief
You probably won't be happy with watching commercials while browsing your news feed, but remember that the social network is a service, and you use it for free.
Citing anonymous sources, Bloomberg reported that Facebook's video ads -- which will look a lot like short TV ads -- will last 15 seconds. That's a lot shorter than many television ads.
This length was likely strategically chosen: Photo-sharing site Instagram, which Facebook acquired in April, recently launched a video capability that lets users upload and share 15-second videos. Since many Instagram users also post their photos and videos to Facebook, you may already be used to the video length.
What's still unknown is how these ads will be displayed: full-screen or in-line. Full-screen ads are more intrusive to users, but are guaranteed views for advertisers. Ads that play automatically within the news feed are more user-friendly, but may not be as attractive or powerful to advertisers.
2. Ads Will Be Infrequent
You're probably most concerned with how often your browsing will be interrupted by an ad playing. This is an area in which Facebook must tread carefully: Show too many ads and users will abandon the service.
According to reports, you can expect to see commercials in your news feed no more than three times a day. Last week, Facebook CEO Mark Zuckerberg said he's sensitive to how users react to advertising, which is why he plans to limit the number of ads you see to about one for every 20 updates, or comprising about 5% of your news feed.
The set frequency for these commercials seems fair: Think about how often you're served an ad when browsing YouTube. But other factors that determine just how intrusive or not these ads become are still unknown: the placement of the commercial (top of news feed versus further down); how far they're spread out; and whether the content is relevant to those it's targeted to.
It wouldn't be wise for Facebook to place ads at the top of your news feed -- if it's the first thing you see when you log in, many users will be tempted to sign out. Placing the ads further down the news feed -- or after 20 updates like it does for other ads -- seems more reasonable.
3. Your Demographics Determine What You'll See
Facebook's traditional ad platform lets businesses target their audience based on a number of factors, including geography, interests, age, gender, location, relationship status, education and more. Facebook's video ads, according to reports, will only let advertisers target you based on age and gender.
While these targeting options aren't as plentiful as Facebook's traditional ads, they still offer better targeting than what is sold on television, possibly making the price point -- of up to $2.5 million per day -- more worthwhile to executives.
It would be smart for Facebook -- and better for advertisers -- to serve you ads that interest you based on more than just your gender and age. TV networks that offer online streaming give users the option to rate whether or not a commercial was relevant, aiming to serve you more appropriate ads. While it doesn't appear this is in the immediate plans for Facebook's initial video ads, it could be an option in later iterations.
Tell us what you think: Will Facebook ads be too intrusive? What kinds of options would you like to see as a user?


Cloud Adoption: 4 Human Costs

Just a few short decades ago, the most expensive IT resources were computers, and human operators were interchangeable. Now the roles are reversed -- technology assets have become a commodity while organizations place a premium on people.
To that end, the adoption of cloud computing brings with it a series of changes that directly impact the IT workforce. Failing to account for those changes can reduce the value of the cloud and increase IT costs and dysfunction. There are at least four major areas of human cost to assess when planning a cloud strategy and selecting a cloud provider.
No. 1: Cost Of Changed Expectations
Employees aren't rubes when it comes to the cloud. Sure, most people can't differentiate software-as-a-service from platform-as-a-service, but the recent consumerization-of-IT phenomenon has reset expectations. Most people regularly use cloud-based email clients, collaboration tools and even business apps. They've come to expect a new class of services for their digital consumption, and those expectations will be present for any cloud initiative your company starts.
Developers will expect more sophisticated deployments, project teams will expect easier acquisition of environments, and end users will expect their systems to go live faster.
As a result of these expectations, organizations face human costs in a range of areas. What must change? IT organizations must streamline server requisition and approval processes. They must update service catalogs. They may have to update configuration management systems, as well as retool finance systems and processes to move toward IT-as-a-service.
IT operations will see a (major) uptick in requests for temporary environments. One of the transformational aspects of cloud computing is the ease with which you can stand up servers. So once it becomes "easy" to get cloud servers, expect skyrocketing demand for environments to test new software releases, run proofs of concept, execute performance testing or host training instances. Without proper planning, these demands can overwhelm IT organizations already stretched thin.
Fear not. There are ways to prepare for these new expectations. Start small by offering a few new services in the catalog, and constantly iterate over the new processes until you find the right balance between compliance with organizational standards and the necessity to think of service delivery in a new way. Embrace the concept of chargebacks for cloud services. Empower departments to provision (and pay for!) resources as they see fit.
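The chargeback idea above amounts to metering each department's consumption and billing it back. A minimal sketch in Python, assuming a simple hours-times-rate billing model (real providers meter many more dimensions):

```python
from collections import defaultdict

def chargeback_totals(usage_records):
    """Sum per-department cloud charges.

    usage_records: iterable of (department, hours, rate_per_hour)
    tuples, e.g. pulled from a provider's usage export.
    Returns a dict mapping department -> total charge.
    """
    totals = defaultdict(float)
    for dept, hours, rate in usage_records:
        totals[dept] += hours * rate
    return dict(totals)
```

A per-department roll-up like this is exactly why the article suggests asking providers for per-department invoices: without one, you end up rebuilding this accounting yourself from a single consolidated bill.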
The IT operations team will still have to play a role in maintaining these systems (see point No. 4 below), but you can ease the burden by encouraging a decentralized self-service culture. Cloud computing may be met with great excitement within your organization, but without setting expectations properly, you may struggle to deliver services in the way users hope for.
How can cloud providers help? Consider asking them for case studies on how other customers have dealt with the change management aspect of cloud programs. Make sure that your provider has the ability to deliver per-department invoices and billing so that you don't incur extra overhead parsing a single invoice and trying to dole out expenses.
No. 2: Cost Of Educating Staff
Cloud computing is truly a new model of planning and consuming technology resources, and you'll likely buy these resources from a provider that's not already entrenched in the IT landscape. While there may be resistance to this model by those whose roles will change as a result, the vast majority of cloud initiatives are led by IT organizations, and they want those efforts to succeed.
Don't underestimate the cost of retraining your technical staffers. They may have to learn a new platform that looks and feels like nothing in the data center today. Operations and architecture teams must learn and apply key deployment patterns that are vital to pushing highly available systems to the cloud. Senior staffers should all be trained to recognize the scenarios where "cloud" is the best fit so that they only deploy applications that can add value by running in the cloud.
It's very likely that staff assignments will change, as there's less of a need for physical infrastructure experts and "assembly line" server builders who only do one piece of the provisioning process. All of this means that to create a higher probability of success for your cloud program, you must plan a comprehensive training effort that targets each affected party.
How can you keep the cost of planning and training down and not paralyze your staff in the run-up to your cloud deployment? Find eager members of the architecture, development, operations, project management and business analysis teams and form a small team to evangelize their knowledge to the rest of the organization.
Focus heavily on "gatekeeper" roles such as architecture and operations so that they can keep unsuitable applications from ever reaching the cloud. Have the architecture team revamp existing reference architecture models so that each department can see where cloud environments fit in the overall IT landscape. Finally, make sure that operations, architecture and development teams are trained and ready for the new reality of security, data storage and integration in the cloud.
Cloud providers can help reduce this human cost. Check to see if your provider has an extensive set of whitepapers on how its cloud works. See if it has a professional services organization that can do training for specific roles. And while it may not seem important, verify that your cloud provider provides a logical, well-organized user interface, which will go a long way to reducing the amount of upfront training needed and ease the transition from the existing, familiar toolset.
No. 3: Cost Of Migration
Whether you're planning to migrate existing workloads to the cloud or use the cloud for net-new environments, there's a human cost in setting things up.
It's not trivial to move applications from your data center to the cloud. Analyze your IT landscape for suitable migration candidates; prepare those applications by either refactoring or rebuilding them; load those applications into the new environment; integrate them with the on-premises infrastructure; run both environments in parallel for a validation period; and sunset the on-premises environment. Each of those steps involves a number of cross-functional teams, so coordination is critical.
Even if you don't plan to move any existing applications to the cloud, you must still extend and migrate your existing architecture to the cloud. Consider identity management. New (internal-facing) systems must be aware of the user accessing the system without requiring an entirely new authentication scheme. This means that you will want to extend your identity infrastructure to the cloud to create a seamless experience for end users. To have a truly integrated portfolio -- regardless of where the application is hosted -- you must extend your infrastructure perimeter to the cloud. Your IT operations team will have to spend a fair amount of time planning and implementing this integration layer.
Make migration and integration a core part of your planning discussions. Look for obvious migration candidates, including lightly modified commercial packages such as Microsoft Exchange and SharePoint, service-oriented Web applications and applications with bursty, unpredictable usage. Don't waste time trying to retrofit monolithic commercial software, or systems with a web of connections to internal systems.
Establish a cohesive plan for how your core infrastructure components -- identity, networks, data and applications -- will be exposed to the cloud. Choose a non-mission-critical application as a trial balloon.
Look for a provider with a software catalog that lets you easily load your virtual machines and custom applications onto cloud servers. Look for guidance on all the ways you can (and can't!) create integration points between the cloud and your own data center. Work with a professional services group to plan the cutover procedures and minimize disruption to end users. All of these activities will reduce the toll on your staff while preventing trial-and-error migrations.
No. 4: Cost Of Maintenance
Estimates show that at least 70% of IT budgets go to maintenance of existing systems. That percentage may not change dramatically by using cloud technologies, and it could even go up if you don't have the automation to handle the influx of new resources. A successful cloud program will lead to requests for more environments (see point No. 1 above) and will support the construction of new types of applications. But can IT handle that?
What would happen if your organization doubled its server footprint tomorrow? Surveys show that server-to-admin ratios range from 50:1 to 300:1 in a typical enterprise data center. Management of those servers includes installing software, patching, performing security scans and integrating with networks and other systems.
Management becomes more daunting as servers get added -- and deleted -- each hour. By adding cloud servers, IT pros now have to maintain server templates, keep configuration management systems up to date and keep an elastic pool of servers secure and running smoothly. Given that server patching is still one of the most painful and time-consuming activities (because of testing and the inevitable reboots), adding more servers can cripple an organization that doesn't embrace automation.
The only way to truly succeed in the cloud on a large scale is to aggressively identify ways to automate server provisioning, scaling, patching, updating and retiring. Use commercial tools and scripting engines to eliminate manual tasks wherever possible.
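One concrete automation pattern for the patching problem above is batching: reboot only a fraction of the fleet at a time so capacity is never wiped out. A hedged Python sketch of the scheduling step (the function name and batching policy are illustrative assumptions, not any particular vendor's tool):

```python
def plan_patch_batches(server_ids, batch_size):
    """Split a server fleet into patch batches so that no more than
    batch_size machines are patched (and rebooted) at the same time."""
    if batch_size < 1:
        raise ValueError("batch_size must be at least 1")
    return [server_ids[i:i + batch_size]
            for i in range(0, len(server_ids), batch_size)]
```

Feeding each batch to a provider's bulk-action or configuration-management API, waiting for health checks between batches, is the kind of scripted workflow that keeps a doubled server footprint from doubling the admin workload.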
Cloud providers offer a range of solutions. Some let you set global security, monitoring and usage policies that cascade to all users. Look for clouds that make it easy to scale servers (automatically) based on utilization, thus saving you the human effort of monitoring and manually resizing servers. Find a provider that makes it easy to schedule maintenance and perform bulk actions against sets of servers. See if you can offload time-consuming aspects of server management -- like patching -- to the cloud provider's managed services team. A cloud provider that embraces automation is a cloud provider that will keep your human cost under control.

Twitter Expands Context With Related Headline Links

Twitter has made a name for itself in the world of breaking news. Now the microblogging platform wants to add more context to these newsworthy tweets with a new feature it announced Monday called Related Headlines. You can find this new section only on the Twitter website -- specifically, on the permalink page of tweets that have been embedded in websites. Clicking the "details" button or the tweet's timestamp will show you a list of websites that have embedded the tweet in an article. When a tweet is embedded, it's displayed with expanded media such as photos, videos and article summaries and includes real-time retweet and favorite counts.
Twitter's Brian Ellin said in a post on Twitter's developer blog that this feature aims to help users discover additional information about the tweet and is a value-add for publishers.
"When you embed tweets in content, the headline of your article and Twitter account will be surfaced on the tweet's permalink page for all to see," he said. "We think this will help more people discover the larger story behind the tweet, drive clicks to your articles and help grow your audience on Twitter."
For example, when Asiana Flight 214 crash-landed at San Francisco International Airport in July, news outlets from across the globe embedded the image of the plane engulfed in smoke that a passenger posted on Twitter. Now you can find the articles that embedded that tweet on its permalink page.
Most recently, Facebook announced that it was introducing embedded posts, starting with a handful of publishers. Twitter, which is steps ahead of Facebook with this feature, will likely be the favored resource, since news outlets now have the opportunity to gain additional referral traffic as a result of being placed in the "Related Headlines" section.
While all Twitter users can view this feature, not everyone will be included in the Related Headlines section. Twitter said that publishers who are already using embedded tweets will be in the first group to have their article headlines surfaced. It will add additional publishers in the coming weeks.
To be eligible for Twitter headlines, you need to become an approved publisher. You can be considered by submitting an application using this form. You must be logged into your Twitter account to access it.
This update is the latest in a handful of new changes the microblogging platform has rolled out. Last month Twitter announced improvements to search results and to user interaction on Twitter for iPhone and Twitter for Mac. Most recently, it unveiled the next version of its two-factor authentication system, which the company said is more secure.

Outlook.com gets improved alias management


Now that Outlook.com is actually working, Microsoft can redirect its focus to improving the service rather than fixing it. Today, the tech giant announced that it has improved the management of aliases on the web-based email service. "Several years ago we launched the ability to rename or to add aliases to your account, which gave important flexibility to manage these changes. But we found that these tasks were a little too monolithic. For example, sometimes you wanted to sign in with one alias but use another to send mail or display on your Xbox. So we started working to break these tasks down to give you more flexibility", says Eric Doerr, Group Program Manager for Microsoft account.
The company further explains that "we're replacing rename with a simpler and clearer choice to make any of your aliases a primary alias. We've also made it clearer why you might want to do this (and why you might not). Now that you can sign in with any alias, really the only reason to make a different alias your primary one is if you want a different email name to show on your Microsoft devices, like the background of your Xbox or your Windows 8 PC".

I am actually a big fan of the Outlook.com service, so I eagerly went to the settings page to try it out. I was very impressed with how easy it was, not only to add an alias, but also to change which email address was primary. I can choose on the fly which email address to display on my Microsoft devices -- very helpful!
Other than the short-lived outage that Outlook.com recently experienced, I think the service should be applauded for continuing to offer a well-designed ever-improving web-based email experience. I wish the same could be said about Gmail; between the ever-increasing advertisements and clunky tabs, Google's service is sinking fast.

Golden Calls: Will China embrace a champagne iPhone?


BEIJING: If Apple hopes to woo more Chinese by adding a glitzy coating - some call it champagne, some gold - to its next iPhone, it may be in for a surprise.

While gold is hugely popular as a safe haven and a status symbol - China is set to overtake India as the world's biggest gold consumer this year - shoppers at an Apple store in Beijing weren't all convinced it should be coupled with that pinnacle of mobile gadgetry.

Ni Suyang, a 49-year-old worker at a Beijing state-owned enterprise, said that colour mattered less to her than the glass surface and silver metallic finish.

"A gold colour looks high-end but is a little tacky," she said.

Gold and mobile phones are not strangers. Britain's Gold & Co makes gold-plated iPhones, iPads and BlackBerrys which it also sells in India and China.

In Shenzhen many small local brands make gold-plated feature phones and smartphones. The less well-heeled can adorn their devices with jewel-studded and gold phone covers.

Apple's decision to add a champagne or gold covered iPhone to its range - confirmed by supply chain sources in Taiwan - would be a departure from its black and white norm.

Apple could not be reached for comment.

Commercially it makes sense, said Jerry Zou, Senior VP and Partner at FleishmanHillard, a public relations firm in Beijing. New colours would add "novelty and variety, both of which are key to winning over fickle Chinese consumers".

A champagne colour "would convey an image suggesting high-end luxury but a bit more restrained and subtle".

ALL THAT GLITTERS...

But browsers at Apple's Xidan store weren't so sure - even on which gender would like it.

"Gold is for guys, I think," said 22-year-old Meng Xiang, a retail buyer working in Guangzhou, who said she preferred pink and white. "I would consider buying a gold iPhone for my boyfriend."

Cui Baocheng, a 48-year-old bank manager, disagreed. "I prefer black to gold," he said. "Men usually like black. Champagne might be very ugly."

Indeed, there's a danger that by trying to broaden its appeal Apple may end up undermining what makes the iPhone so desirable in the first place.

Younger Chinese see gold as old-fashioned and tacky, and are increasingly opting for platinum - dubbed "white gold" in Chinese - for weddings and gifts.

"An iPhone with more colours means that Apple is adapting to consumers' tastes, especially a gold colour that Chinese people like," said Xu Fang, a 28-year-old real estate agent. "However, I think this might undermine the value and uniqueness of the brand."

Apple's sales in Greater China, its second biggest market, slumped 43 per cent in April-June from the previous quarter. Its market share has almost halved since last year to below 5 per cent, according to industry researcher Canalys.

The bigger problem, says Shanghai-based product designer Brandon Edwards, is that while gold added "cultural relevance on top of Apple's inherent brand value" and may attract premium users from other brands, "Apple's main issue in China and emerging markets is centred around acquiring new customers, and this doesn't hit those people at all".

Indeed, consumers in India, where Apple's market share is just over 2 per cent, were just as sceptical. Mumbai phone retailer Manish Khatri said he did occasionally get customers asking for gold-coloured phones, but the biggest deterrent to buying an iPhone for most of them was cost.

For others, gold is something to buy, not to slap on a mobile device.

Said Vikas Jindal, a 35-year-old Delhi businessman and a regular buyer of gold: "I'll look stupid if I carry a gold-coloured phone. A phone should be simple and sober."

Gotta Access Linux Files? Paragon ExtFS Delivers the Goods

If you run multiple operating systems, Linux among them, chances are you've run into situations where you needed to get at some of your Linux files while running something else. Paragon ExtFS grants full access to Linux partitions while you run Windows. Its UFSD technology also provides full read/write access, as well as format control, to volumes of the most popular file systems.


Supported file systems include NTFS, FAT, Ext2/Ext3/Ext4 and 3FS. ExtFS has versions for Android, Windows, Mac, Linux and DOS.
The ability to cross over partitions from mobile devices and more traditional office hardware that normally do not let you see other volumes is a huge benefit. Until I found ExtFS, I had to work with several clever workarounds. Being able to run the Windows OS and access documents stored in a Linux volume on that same hard drive is a handy productivity booster.


Paragon's ExtFS displays its program window (right) and the list of Windows-accessible volumes (left) on the Windows desktop before the Linux volume is mounted.
You get full read and write access to Linux-formatted partitions. This simplifies data sharing. It also gives you the ability to transfer files among otherwise incompatible systems with the ease of using the OS's already-present file manager tools.

Speedy Performance

Paragon released the Windows version of its free ExtFS tool last month. It is a plug-in for Dokan -- a file system for Windows.
Paragon's driver technology works with almost all devices on the network. If you work and play across Windows and Linux devices, ExtFS could become an essential application.
I have used it to move data and share files among tablets, smartphones, servers, PCs and workstations, regardless of their operating systems.
I have been pleased by its speedy performance as well. Since it allows me to use existing Windows file-managing applications, I have not noticed any slow-down in transfer speeds. It is very seamless.

Multiple Uses

For the purposes of this review, I am focusing on ExtFS for Windows. Similar free downloads are available from the Paragon website for the Mac, Linux and Android platforms, letting you reach foreign volumes from a host OS other than Windows.
For my purposes, ExtFS for Windows fills a need in getting easy access to my Linux volume from the Windows side of computers I use with dual-boot configurations.
I settled long ago on a workaround that lets me read and write files on the Windows volume from the Linux side of the hard drive.

Getting It

Clicking on the download button on the Paragon website brings you to a quick registration form. It is painless and asks you to create a user name and email address for user verification.
Check your email for a follow-up confirmation link. Then check again for a download link.
Installation of ExtFS is routine for the Windows platform. Confirm that you want to run the installer. When the process completes, an icon will sit on your Windows desktop.
Be sure to reboot your computer after installation completes. I got some balky response on some computers when I tried to use ExtFS without first rebooting.
Paragon's application has very broad system requirements. It worked fine on my hardware running Windows XP, Windows Vista, Windows 7 and Windows 8.

Using It

As I said earlier, I spend most of my computing time in various Linux distros and the Linux-based Android OS on my smartphone and tablets. When I need a convenient bridge from the Windows OS to the Linux volume on a computer, however, ExtFS has been a priceless -- albeit free -- addition to my computing tools.
Click on the ExtFS icon on the Windows desktop. That opens a small window showing the volumes on the hard drive. The list will include a highlighted entry for the unmounted Linux volume.
Click on this Linux volume entry, and then click the "Mount" button on the bottom of the ExtFS for Windows window. A drive letter selection window will pop up. Scroll through the available drive letters to assign the Linux volume a Windows Drive designation.
Do not worry. This procedure does not alter anything in the Linux volume. It merely accommodates the Windows file system. Once the Linux volume is mounted, the button changes to "Unmount."
You can now open a Windows file manager, and you will see the Linux volume included in the display. You can access, read, write, delete, copy and move files among directories on the Windows side and the Linux side of the hard drive.
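Once mounted, the Linux volume also behaves like any other Windows drive from a script's point of view. A minimal sketch in Python; the drive letter `L:` here is a hypothetical assignment made during the mount step, not something ExtFS fixes for you:

```python
import os

def list_linux_volume(drive="L:\\"):
    """Return sorted entries on the mounted Linux volume, or None if it is absent."""
    if not os.path.exists(drive):
        return None
    return sorted(os.listdir(drive))

# On a machine where the volume is not mounted, this simply reports None
# rather than raising an error.
print(list_linux_volume())
```

After you click "Unmount" in ExtFS, the drive letter disappears and the same check returns None again.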

The Limitations

Let's be clear about using ExtFS. It is not a virtual machine component that will let you run the Linux OS from within Microsoft Windows.
Similarly, ExtFS is not a reverse Wine application. You cannot run a Linux program in a special window from inside the Microsoft platform.
What you can do, however, is open documents stored in the Linux volume with compatible Windows programs. You can also perform standard file management tasks on any file stored in the Linux volume.
Be careful! Since Linux is not running, the normal Linux file permissions do not exist, so you can possibly mess up your Linux installation should you dabble in folders where you do not belong.

Bottom Line

Paragon's ExtFS is a new tool for Linux users. It is reliable and does the job it was designed to handle.
ExtFS is not for every Linux user, but it will make your computing life much easier if you must coexist between the Linux and Microsoft Windows worlds on the same computer.


Dell Foglight vanquishes zombie VMs

With an update to its Foglight for Virtualization software package, Dell can now help organizations rid their systems of resource-sucking zombie virtual machines.
"It's so easy to create VMs. We have customers creating thousands and thousands of them. But what are the lifecycles of these VMs? In these larger environments, [administrators] don't know if they are being used," said John Maxwell, Dell vice president of product management.
Foglight for Virtualization Enterprise Edition 7.0 will also support the latest versions of VMware's virtualization products.
Dell plans to demonstrate the software's new capabilities at the VMworld conference next week in San Francisco, along with a newly updated Foglight for Virtualization Standard Edition (which is a separate product entirely from the enterprise edition) and Foglight for Storage.
Formerly called Quest vFoglight Pro, Foglight for Virtualization 7.0 Enterprise Edition is part of the Foglight family of software programs for easing and automating system administration tasks. Dell purchased Quest Software in 2012.
Foglight for Virtualization provides a set of utilities for managing virtual machines running on VMware, Red Hat, or Microsoft virtualization platforms.
One significant new feature is the ability to clean up virtual machines that are no longer being used in VMware environments, but still reside on the system somewhere.
The software can now recognize a wide range of purposeless virtual machines hiding in a VMware infrastructure and even delete them on the administrator's behalf.
It can identify what Maxwell calls zombie VMs, for instance. These are VMs that continue to run but do not appear on the VMware vCenter console. In some cases, an administrator deleted a VM's definition from vCenter, thinking this would delete the VM itself. In other cases, these rogue VMs could even have been surreptitiously installed on systems by malicious attackers.
Foglight compares vCenter's manifest of the VMs that are supposed to be running with a list of VMs it creates that are actually running, highlighting those that are not identified by vCenter.
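The comparison described above amounts to a set difference between two inventories. A minimal sketch, with hypothetical VM names standing in for the real vCenter and host queries:

```python
def find_zombie_vms(registered, running):
    """Return VMs that are actually running but unknown to vCenter."""
    return sorted(set(running) - set(registered))

# vCenter knows about two VMs, but a third is still running on a host.
registered = ["web-01", "db-01"]
running = ["web-01", "db-01", "old-test-vm"]
print(find_zombie_vms(registered, running))  # ['old-test-vm']
```

The hard part in practice is gathering the "actually running" list from every host, which is what Foglight's discovery does for you.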
Another category of shiftless VMs comprises the abandoned images and outdated VM backup snapshots that sit dormant in storage. "We've run into sites where VMs haven't been powered on for years, but they still take up storage," Maxwell said.
The new Foglight also can do what Maxwell called "rightsizing." The software can examine the actual resources a VM is consuming -- such as the allocated CPU, memory or disk -- and offer suggestions about how to provision that VM more efficiently. It can even reprovision the resources itself.
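Rightsizing of this kind boils down to comparing what a VM was allocated against its observed peak usage plus some safety headroom. A simplified sketch; the 25 percent headroom figure is an assumption for illustration, not Foglight's actual policy:

```python
def rightsize(allocated_mb, peak_used_mb, headroom=1.25):
    """Suggest a smaller memory allocation when peak usage plus headroom is below it."""
    suggested = int(peak_used_mb * headroom)
    return min(suggested, allocated_mb)

# A VM allocated 8192 MB that never exceeded 2000 MB could be trimmed to 2500 MB.
print(rightsize(8192, 2000))  # 2500
```

The same comparison applies per resource (CPU, memory, disk), each with its own headroom.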
The software also includes a number of other updates. It now supports the latest versions of VMware vSphere and vCloud Director. Working with VMware View, it now provides end-to-end visibility to virtual desktop infrastructure (VDI).
Version 3 of Foglight for Storage Management will feature pool-level analysis for those tracking capacity for thin provisioning, as well as a performance analyzer that allows the administrator to click through the VM statistics down to the storage array and even to individual nodes within the storage array. In addition, it now supports Dell Compellent, Dell EqualLogic, and EMC VMAX arrays.
Foglight for Virtualization Standard Edition, which the company markets to small and midsized businesses, now comes with improved capacity management and planning, and a power minimization feature that can examine workloads and recommend the least number of servers needed.
Foglight for Virtualization, Enterprise Edition 7.0 will cost $799 per physical socket. Foglight for Storage Management 3.0 will cost $499 per socket and Foglight for Virtualization Standard Edition 7.0 will cost $399 per physical socket. All of these products will be available on or around Aug. 31.
Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com

Rackspace hosts VMware management with new dedicated server

The introduction of Rackspace's hosted Dedicated VMware vCenter Server will allow IT staff to control their VMware environments from a data center run by the vendor.
As enterprises move IT infrastructure out of their own data centers, vendors are offering a growing list of alternatives that in Rackspace's world includes hybrid clouds, which combine public and private clouds, and dedicated hosting. The latest addition to the latter offering is Dedicated VMware vCenter Server, which allows IT departments to retain full control with the tools they are used to without having to bother with the underlying infrastructure.
"Many of our customers have large VMware installations in-house today, and they have made significant investments in that and don't want to throw it away," said Andrew Wing, senior product manager at Rackspace.
For companies that want to stick with VMware, but don't want to expand their data centers or have data centers at all, Rackspace already offers managed virtualization based on VMware vSphere 5.1. The addition of vCenter builds on that. From a single console, administrators can control virtual servers running in-house and in Rackspace's data centers. It is also possible to mix vCenter servers that run in an enterprise's own data center and ones that are hosted by Rackspace.
Until now, Rackspace's hosted VMware platform used a shared vCenter model, which only offered limited management capabilities, according to Wing.
Rackspace will at first charge a yet undecided monthly fee per hypervisor that is managed by the hosted vCenter server and in the next phase move to a model where enterprises pay for what they use.
"It is just a question of getting our billing systems set up to make that a reality," Wing said.
Today, enterprises are using Rackspace's managed virtualization to run things like e-commerce platforms and Web content management systems. But Rackspace is hoping to attract more central IT functions, including ERP and CRM systems, according to Wing.
Send news tips and comments to mikael_ricknas@idg.com

SAP takes the fight to Salesforce.com, Oracle with social intelligence app

Many companies have begun using specialized software to analyze what people are saying about their products and services on social media, and now SAP says it can help them match up individuals' social profiles with customer history data from CRM (customer relationship management) systems.
Dubbed Social Contact Intelligence, the application can help sales and marketing staff find better leads for sales as well as gain more knowledge about their actual customers' likes and dislikes, according to SAP.
Social Contact Intelligence is built on top of and dependent on HANA, SAP's in-memory database platform. It's part of a broader suite, Customer Engagement Intelligence, that is now in "ramp-up," SAP's term for an initial release with a small set of customers. Currently it's only offered on-premises, but SAP is considering cloud-based deployments for the future, according to a spokeswoman.
Core CRM software is "such a commodity it's almost relegated to a system of record," said Jamie Anderson, vice president of customer solution marketing. Thanks to the rise of social media and resulting changes to the way customers interact with companies and make buying decisions, "we've realized the CRM market is evolving faster than CRM products on their own."
SAP had already been reselling software from Netbase for social media analytics, but now the Contact Intelligence product brings internal customer data to the equation, he said.
The three other elements of SAP's Customer Engagement Intelligence suite are Audience Discovery and Targeting, for running segmented marketing campaigns; Customer Value Intelligence, which recommends ways to cross-sell and up-sell products to clients; and Account Intelligence, a mobile application for sales representatives.
Tuesday's announcement comes after SAP's unveiling in November of yet another social CRM-themed product set called 360 Customer, which ties together HANA, CRM, social analytics from Netbase and the Jam social network.
Oracle, Salesforce.com and other vendors are also moving quickly to build out social software portfolios, seeing the market as a major opportunity to sell existing customers more software when they have little interest or need to expand their core CRM system.
The competitive climate can put customers at a disadvantage, according to a recent Forrester Research report.
"Decoding and navigating the crowded social technology vendor landscape isn't easy," wrote analysts Nate Elliott and Zach Hofer-Shall. "Most vendors offer a unique range of social technologies, but no single vendor covers the entire value chain. Meanwhile, buzzword-packed marketing materials make it difficult to differentiate the players and find the right fit."
The level of emphasis and investment that companies should place on social software investments depends on their size, according to another recent Forrester report.
Immature companies should start small, analysts Allison Smith and Carlton Doty wrote: "Track down a high-impact use case, find a listening platform partner who can guide you, and experiment. This is an iterative, test-and-learn kind of process."

Companies in a medium stage of growth should not "settle for 'good enough'" from a vendor and must avoid signing more than a one-year deal, they added. "With limited exceptions, social listening platforms are easy to replace -- and if yours is holding you back, get rid of it."
When no single best platform is targeted after soliciting bids from vendors, "many companies opt to create a Frankenstein's monster combination of multiple platforms," they wrote. "This approach is cumbersome and pricey, but a necessary evil unless such firms are willing to simplify some business requirements."
Meanwhile, mature companies should "prepare to listen on a larger scale," according to Smith and Doty. "Developing into a fully integrated social intelligence practice will give you the skills to take your listening to the next level -- outside of social," they wrote. "Your customers engage with you across channels and in unstructured, nonlinear ways. They also provide feedback in traditional channels like surveys, in the call center, and in web-based self-service functions."
Chris Kanaracus covers enterprise software and general technology breaking news for The IDG News Service. Chris' email address is Chris_Kanaracus@idg.com

HP equips WorkSite with file-sharing service

Hewlett-Packard has launched a file storage service for users of its Autonomy WorkSite document management software, promising that it will be more capable than consumer-focused hosted file services.
"With consumer-grade services, you can't govern what's out there and often you are not sure about security," said Dan Carmel, who is the head of enterprise content management strategy and solutions for HP Autonomy.
The LinkSite service synchronizes files on an internal WorkSite deployment with an HP file storage repository accessible from the Internet, making internal files available from outside the corporate firewall. All files inherit their read and write permissions from their in-house counterparts.
WorkSite is HP Autonomy's document management suite, which can be used to index and store corporate files. Autonomy acquired the company that created WorkSite, Interwoven, in 2009. Autonomy itself was acquired by HP in 2011.
HP is pitching the new cloud companion service as superior file hosting to popular consumer-focused services, because it offers more security, auditing and control for system administrators.
The company also asserts that LinkSite has advantages over other enterprise-focused file-sharing services -- such as Citrix ShareFile, or Novell Filr -- in that, at least for users of WorkSite, it is integrated with an existing content management system, so an administrator does not need to set up separate sets of policies for a new hosted file service. Employees also don't have to learn a new interface.
Users can access LinkSite through any browser that supports HTML5 markup, as well as through apps for Apple iOS and Android devices. For administrators, LinkSite can audit who creates, modifies or deletes files. They also get a dashboard summarizing usage statistics.
The service is built on HP Flow CM, a hosted content management service HP launched last year for storing online scans made by the company's multifunctional printers. LinkSite runs on HP Cloud Services data centers located in the U.S.
The service transfers files through the HTTPS (Hypertext Transfer Protocol Secure) protocol. Carmel declined to elaborate on any policies that HP has on working with government intelligence agencies, in terms of disclosing or withholding customer data, other than to note HP follows standard industry and legal procedures for dealing with such situations as they arise.
List price starts at $19.95 per month for each WorkSite-licensed user, and prices decrease with volume purchases. There is no charge for external users. Each account gets 1GB of storage, and there is no charge for bandwidth use.
The service will go live Sept. 15.
Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com

DIY dev: Should you build your own app?

Custom Web and mobile apps, once the exclusive purview of large companies with vast resources, have become a common hallmark of successful small and midsized businesses. Externally, apps can offer deeper engagement with customers through online and mobile access to useful tools and information. Internally, they can help workers communicate more effectively with highly customized real-time data on their desktop and mobile screens. But reaching the promised land of apps and money can be daunting, and not every app development adventure ends in success.
If you want to put a custom app to work for your business, you'll first need to make one critical decision: Should you outsource it, or try coding it in-house? This decision is so fundamental that many people overlook it entirely without even realizing they have options here. And failing to consider it carefully can cost your company dearly in both opportunities and money. We'll examine some of the most significant factors to help you make the best decision for your business, and give you a sense of what you'll experience if you decide to go the DIY route.
Does your business really need a completely new app, or will you get more benefit from an existing package that can be tweaked to meet your needs? The answer to this question depends largely on what you're looking to accomplish. If, for instance, you just want to add blogging or social media feeds to your website, you can get the job done with any number of free, easy-to-configure options. Or if your goal is to improve internal communication about customer accounts, you'd almost certainly be better served by a proven customer relationship management (CRM) package than by a homegrown database app. If, however, your idea is more novel, like, oh, let's say you wanted to crank up customer engagement at your landscaping business with an app that lets customers submit sketches and pictures of their yards, then you'll probably have to go totally custom (or at least build your solution out of a variety of existing components).
Do you need your app to work on a specific platform, such as iOS, Android, or Windows? Do you need cross-platform functionality? The question of whether to create a native app that runs on a specific platform or a responsive Web app that can run on any device with a browser shouldn't be taken lightly. The answer will depend on how the app will be used, as well as your budget and development time. Native apps typically require more development time than Web apps, and if you need the app to work on multiple platforms, going the native route can double or triple your up-front costs and add considerable expense and complexity to the maintenance of your apps.
Naturally, the importance of the app and the data it will manage are essential considerations, too. Regardless of whether your app is totally original or built from an off-the-shelf platform, apps that will handle sensitive data or that are critical to the functioning of the business can demand a different level of investment and diligence than apps that are merely nice to have. For sensitive, business-critical apps, it's usually advisable to put development in the hands of the most proven, experienced dev team you can get to ensure the data is handled securely and the app is optimized for performance and reliability -- for the overwhelming majority of small and mid-sized companies, that means an outsourced team.

Time is also a key factor: Do you need this app launched with full functionality within a few weeks, or can you afford to give it more time, launch with a small set of features, and then iterate? If you need the whole thing working at launch time and don't have months to spare, outsourcing is likely again the best option. But bear in mind that even outsourced development can take longer than many people expect. Development timelines are one of the most contentious issues between dev consultants and their clients, and business people are often shocked to discover that what they thought would be a quick, simple process is bound to take several weeks longer and many thousands of dollars more in development costs than they expected.
The pros and cons of outsourcing development
Handing your dev work over to an outside contractor can seem like a no-brainer for a business with substantial resource constraints. And in truth, there are plenty of benefits to relying on outside talent. But there are also some downsides to entrusting important dev work to outsiders. Here's a quick list of pros and cons.
The biggest benefit you'll get from an outside dev contractor is peace of mind. Assuming you vet your contractor thoroughly beforehand and scrutinize their references and examples of past work, you can enjoy some assurance that you're dealing with someone with a proven track record of shipping high quality apps in a timely fashion. Experienced dev consultants can counsel you to avoid potential pitfalls, narrow the scope of your app's functionality in ways that will benefit your users (the importance of this cannot be overestimated), and plot a development strategy to build the best possible app for your needs.
Experienced developers will also save you a lot of time on the front end. Not only will they most likely already have the required technical knowledge to build your app, but they'll probably also have already built something similar. So you can often get your app up and running within several weeks rather than several months.
At the same time, these benefits often come with some potentially frustrating complications. Experienced developers are in high demand everywhere, and they command high consulting fees. Those fees don't stop once the app ships, either, since bug fixes and security updates will require additional work on an ongoing basis. There's no such thing as a totally finished, bug-free app. Budget accordingly, and hope your contractor sticks around to help with the maintenance.
Communicating with outside development teams can be surprisingly challenging for many in the business world. Business people often underestimate the complexity that goes into coding even a relatively simple app, and software developers often lack important insight into a client's business structure, team dynamics, and objectives. Miscommunication with an outside dev team can cost you time and money, so be sure to appoint a talented project manager from within your organization to ensure smooth communication with the developer.
Be especially vigilant to resist what developers call "scope creep." This potentially project-wrecking phenomenon occurs when clients fail to express important feature requests early in the process, assume that a common or popular feature will be included even though it hasn't been explicitly discussed, or decide they want to add new functionality after development has begun. Adding new features changes the scope of the project, and that typically translates to more money and time. Because of this, misunderstandings over project scope can damage relationships between clients and developers in ways that ultimately undermine the quality of the final product.

The best way to avoid scope creep is to diligently itemize every feature you want to see in the final product before any work begins. If you want to be able to search your customer database by hair color, make sure that feature is clearly accounted for in writing early on. It's also worth heeding your developer's advice on this front: Experienced developers can help you spot extraneous feature requests early in the process, and you may be grateful for the resulting savings in time, dev costs, and reduced complexity, even if it means your pet feature has to die in the womb.
Perhaps the biggest downside to outsourcing is the missed opportunity to build institutional knowledge about the app itself. As data becomes increasingly critical to business operations, it's good to have staff on hand who really know the ins and outs of the software that drives the business. Custom apps built by outside contractors present a unique problem for IT teams, because ongoing support generally depends on the availability and affordability of the contractor who built the software, and switching developers can be difficult and costly for highly customized apps.
The DIY approach
Tackling Web and app development in-house can offer some distinct advantages over outsourcing, but there are significant barriers to entry and some ongoing challenges as well. Here's a short list of the most striking pros and cons.
It's worth noting here that, for the time being, I'm deliberately ignoring the prospect of just going out and hiring an experienced programmer to join your team. Doing so can, obviously, help overcome many of the cons on this list, but it also comes at considerably greater expense than either of the options we're discussing in this article, since experienced programmers command salaries well above the six-figure mark.
The most urgent fact to focus on when considering cultivating in-house development talent is that the learning curve for software development can be immense. It's not excessively trite to say that some people just aren't cut out for it. Learning to code requires a highly analytical mind with a penchant for abstract thinking and incredible attention to detail, as well as a serious investment of time and mental focus. Those who succeed at it generally bring a single-mindedness to the work that eclipses their ability to focus on other things. (And make no mistake: Tackling custom app development will absolutely limit the candidate's ability to work on other projects.)
DIY development only really makes sense when you've got someone on your team whose background and interests already align with the task. Good prospects include IT staffers who've built some websites already or done some software customization or scripting in the past, or otherwise highly analytical types who enjoy delving into deep, thorny tasks and don't lose their cool in the face of apparently unsolvable problems. (Apparently unsolvable problems are a daily occurrence in the life of a developer, and the thrill of solving them is like crack to a good coder. So pick a tenacious geek and give him or her plenty of support and latitude to learn, explore, and create.)
A warning to sole proprietors and very small companies: Programming is massively time consuming, and development projects often require way more time and resources than we estimate on the front end (even for professional dev teams). If you can't afford to devote a full-time worker to nothing but programming for several months, you're better off outsourcing the project. If you're a sole proprietor thinking about doing your own dev work on the side, be prepared to sacrifice all of your nights and weekends for several months just to get started. I'm not saying it won't be worth it, but if you've got a family or a social life, you're in for quite a shock.
While it may be fair to say there's no right answer here, one approach clearly brings more risk than the other. Training and cultivating a new developer from within your own ranks is a tough proposition that offers no guarantee of a great app at the end of the process.
However, the cost of that risk can be surprisingly low in comparison to the cost of a development contractor. Sending a $60,000-a-year IT worker to dev training and giving him or her a few months to create an app can set you back less than $30,000. By contrast, a single fully customized, data-driven app built by an outside consultancy can easily cost you $30,000 to $50,000.
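The back-of-envelope arithmetic behind those figures can be made explicit. The training budget below is an assumed round number, since the article gives only the totals:

```python
salary_per_month = 60_000 / 12        # the $60,000-a-year IT worker
months_on_project = 4                 # "a few months"
training = 5_000                      # assumed cost of dev training
in_house = salary_per_month * months_on_project + training

outsourced_low, outsourced_high = 30_000, 50_000
print(in_house)                       # 25000.0, under the $30,000 mark
print(in_house < outsourced_low)      # True
```

Even doubling the training budget or adding a couple of months keeps the in-house route near the low end of the outsourced range.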
So if you've got time on your side and you'd like to build up some development talent within your organization, investing in in-house development resources can be a smart move over the long run. If, however, you need the assurance of rock-solid reliability and security within a shorter production cycle, you're better off enlisting the help of a proven development contractor.
