Conclusion: A product line engineering approach to digital service development and operation can unlock significant value if due diligence is applied when identifying product line stakeholders and product line scope. A successful product line is one that enables all stakeholders to apply their unique expertise within the context of the product line at exactly those points in time when their knowledge and insights are required as part of the organisational decision making process. Good product line architectures align human expertise, organisational structure, business processes, software system capabilities, and value chain integration with customers and suppliers.
Conclusion: The digitisation of service delivery in the finance, insurance, and government sectors means that all organisations in these sectors are now in the business of developing, maintaining, and operating software products for millions of users, with profound implications for organisational structures, business architectures, and the approach to service development and operation. Whilst internal business support functions can usually be addressed via off-the-shelf software, with very few exceptions, the functionality of customer facing services can’t be sourced off-the-shelf.
Conclusion: The popularity and growth of online social media platforms has pushed social data into the spotlight. Humans using the Web mainly interact with human-produced data. Yet the floods of machine-generated data that flow through the Internet remain invisible to humans. For a number of reasons, attempts by organisations to mine big social data to improve marketing and to increase sales will fall significantly short of expectations. Data from digital devices and sensor networks that are part of the Internet of Things is eclipsing human-produced data. Machines have replaced humans as the most social species on the planet, and this must inform the approach to data science and the development of healthy economic ecosystems.
Conclusion: When implementing enterprise Cloud services, a disciplined and locally distributed approach to user acceptance testing in combination with real-time dashboards for test management and defect management can be used as the centrepiece of a highly scalable quality assurance framework. An effective quality assurance process can go a long way to minimise risks, and to ensure a timely and successful rollout.
Conclusion: The development of new digital services often entails not only changes to workflows but also changes to the business rules that must be enforced by software. Whilst vendors of business rule engine technology often market their products as powerful and highly generic tools, the best results are achieved when restricting the use of the different approaches to specific use cases.
Once upon a time there was a programmer who developed software, working for a software vendor, and there was a CEO, a CIO, and a sales executive who all worked for a manufacturing business. It was a happy time, where everyone knew who developed software, who bought software, who implemented software, and who used software. In this long-gone era businesses delivered physical goods and professional services, and software was a helpful tool to standardise business processes and automate tedious repetitive tasks. Those were the days when hardware was solid, software was easy to deal with (certainly not harder than dealing with a teenager) and humans were the masters of the universe.
Conclusion: In government organisations the potential for standardisation and process automation via the use of enterprise resource planning software is largely limited to internal administration. In terms of digital service development government organisations can optimise their IT budgets by understanding themselves as knowledge-transformation organisations rather than as consumers of off-the-shelf technology.
Conclusion: The operational model and associated processes of larger organisations in many sectors of the economy are encoded in software. Enterprise software from SAP plays a dominant role in many industries and significantly influences the terminologies and workflows used within organisations, in particular in those domains where SAP offers out-of-the-box solutions. The resulting level of standardisation has tangible advantages, but also represents an upper limit to the level of operational efficiency that is achievable. Organisations that rely on SAP are well advised to get independent advice to determine the optimal level of lock-in to SAP.
Conclusion: Organisations that fail to recognise the difference between information and knowledge are at risk of haemorrhaging knowledge at a rate that at the very least has a measurable impact on the quality of service delivered by the organisation. In the worst case, a loss of knowledge poses an existential threat to a product line or to the entire organisation. Whilst tools can play an important role in facilitating knowledge preservation, it is information sharing between individuals and teams that fuels the creation of knowledge.
Conclusion: Consumer-oriented software and online services are raising user expectations. To determine the aspects of user experience design, and the trade-offs that are appropriate in a particular business context, requires extensive collaboration across multiple disciplines. The cross-disciplinary nature of the work must be considered when evaluating external providers of user experience design services. References and case studies should be consulted to confirm cross disciplinary capabilities and the level of expertise in all relevant disciplines.
Conclusion: Enterprise software vendors and enterprise software users are increasingly investing in functionality that is accessible from mobile devices, and many organisations face the challenge of making key legacy applications accessible on mobile devices. Comprehensive and reliable APIs are the key to the creation of architectures that enable a seamless user experience across a range of mobile devices, and across a backend mix of state-of-the-art Cloud services and legacy systems.
Conclusion: The digitisation of services that used to be delivered manually puts the spotlight on user experience as human-to-human interactions are replaced with human-to-software interactions. Organisations that are intending to transition to digital service delivery must consider all the implications from a customer’s perspective. The larger the number of customers, the more preparation is required, and the higher the demands in terms of resilience and scalability of service delivery. Organisations that do not think beyond the business-as-usual scenario of service delivery may find that customer satisfaction ratings can plummet rapidly.
An SOA maturity assessment is the first step towards an integrated platform for delivering innovative digital services
Conclusion: Technology increasingly is a commodity that can be sourced externally. In contrast, trustworthy data has become a highly prized asset. Data storage can be outsourced, and even SOA (Service Oriented Architecture) technology can be sourced from the Cloud, but the patterns of data flow in a service-oriented architecture represent the unique digital DNA of an organisation – these patterns and the associated data structures represent the platform for the development of innovative digital services.
Conclusion: Machines are becoming increasingly proficient at tasks that, in the past, required human intelligence. Virtually all human domain expertise can be encoded in digital data with the right knowledge engineering tools. The bottleneck in understanding between humans and software is shaped by the ambiguities inherent in human communication, not by the challenge of developing machine intelligence. To benefit from big data, organisations need to articulate knowledge in the language of data, i.e. in a format that is not only understandable by humans but also actionable by machines.
Conclusion: As physical and digital supply chains become more integrated across organisational, regional, and national boundaries, the potential impact of an emergency or crisis can be far reaching. A proactive approach to crisis management requires an awareness of all the high-impact crisis and emergency events that could affect an organisation, and requires appropriate tools for risk assessment and active hazard management.
Over the last few years the talk about search engine optimisation has given way to hype about semantic search.
The challenge with semantics is always context. Any useful form of semantic search would have to consider the context of a given search request. At a minimum, the following context variables are relevant: industry, organisation, product line, scientific discipline, project, geography. When this context is known, a semantic search engine can realistically tackle the following use cases:
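The role of the context variables listed above can be sketched in code. The following is a minimal illustration (not any specific search product) of how a semantic search engine might scope ranking to a known context; all field names, document structures, and the naive term-overlap scoring are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchContext:
    """The context variables that scope a semantic search request.
    Field names are illustrative only."""
    industry: Optional[str] = None
    organisation: Optional[str] = None
    product_line: Optional[str] = None
    discipline: Optional[str] = None
    project: Optional[str] = None
    geography: Optional[str] = None

def contextual_search(documents, query, context):
    """Rank documents by query-term overlap, but only within documents
    whose metadata matches every context variable that has been set."""
    def matches(doc):
        return all(
            getattr(context, key) is None or doc["meta"].get(key) == getattr(context, key)
            for key in ("industry", "organisation", "product_line",
                        "discipline", "project", "geography")
        )
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(doc["text"].lower().split())), doc)
        for doc in documents if matches(doc)
    ]
    return [doc for score, doc in sorted(scored, key=lambda s: -s[0]) if score > 0]
```

The point of the sketch is that the context filter runs before any relevance scoring: the same query yields different results in different industries, organisations, or projects.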
Conclusion: Over the last five years the market of crisis management and emergency response systems has undergone a rapid evolution. Innovative solutions exploit the proliferation of smart mobile devices, the continuously growing number of available data feeds, the simplicity of the deployment models afforded by the Web, and powerful geographic information system functionality. Given the maturity of some of the available solutions, it makes sense for larger organisations in the public sector and for utility organisations to consider the deployment of a modern crisis management and incident response system.
Conclusion: Difficulty in defining performance criteria for an enterprise architecture team typically points to a lack of clearly articulated business priorities, or to a lack of a meaningful baseline against which performance can be assessed. An enterprise architecture team needs to be given clear objectives that relate to the performance of the business, without being prescriptive in terms of the target IT system landscape.
Conclusion: Today, nearly all organisations are delivering digital services to customers and suppliers. Quality of service expectations of external stakeholders create significant challenges for organisations that were used to treating IT and software needs as internal topics that are at least one level removed from customers and suppliers. Digital services have evolved into the key mechanism for embedding an organisation into the external value chain. Articulating a clear conceptual picture of the external value chain in precise terms without using any IT jargon is a prerequisite for innovation and successful business transformation; having an IT strategy is no longer good enough.
Conclusion: Every business now operates in a context that includes the use of digital services. While the IT strategies of many organisations articulate a business case for technological innovation, they offer little guidance in terms of organisational patterns that enable and facilitate the delivery of useful and reliable digital services. Organisational structures must be adapted to meet the needs of the new world of digital service networks.
Conclusion: In a few years from now the Cloud services we use today will look as quaint as the highly static Web of 1997 in the rear view mirror. In the wake of the global financial crisis the hype around big data is still on the increase, and big data is perceived as the new oil of the economic engine. At the same time, many of the data management technologies underlying Cloud services are being consolidated, creating new kinds of risks that can only be addressed by the adoption of a different data architecture.
Data scientists are in hot demand. In December 2012 the Harvard Business Review featured an article titled “Data Scientist: The Sexiest Job of the 21st Century”. International online job boards and LinkedIn have many thousands of openings asking for big data skills, and a growing number of openings for data scientists. What is all the hype about?
Conclusion: Cloud infrastructure and platforms have started to alter the landscape of data storage and data processing. Software as a Service (SaaS) Customer Relationship Management (CRM) functionality such as Salesforce.com is considered best of breed, and even traditional vendors such as SAP are transitioning customers to SaaS solutions. The recent disclosure of global Cloud data mining by the US National Security Agency (NSA) has further fuelled concerns about industrial espionage in Europe and has significantly raised citizen awareness with respect to privacy and data custodianship. Any realistic attempt to address these concerns requires radical changes in data architectures and legislation.
Conclusion: Most organisations that use enterprise resource planning (ERP) software have a need to integrate the ERP system with other enterprise software. It is common for ERP systems to be integrated with customer relationship management software (CRM) and with all the bespoke applications that operate at the core of the business. Some organisations strive to simplify the system integration challenge with a single silver-bullet system integration technology, but this approach only works in the simplest scenarios, when the number of system interfaces is small. Instead, aiming for maintainable integration code leads to better results.
Conclusion: Many organisations in Australia rely on SAP software for enterprise resource planning (ERP) software. To get the best results out of their data, a significant number of organisations have implemented a data warehouse alongside operational systems, and are combining SAP software with best-of-breed technologies for customer relationship management and system integration. Whilst SAP software continues to provide important functionality, it pays to understand to what extent standardisation of ERP functionality makes economic sense, and from what point onwards standardisation reduces the organisation’s ability to deliver unique and valuable services. Standardisation is desirable only if it leads to a system landscape that is simpler and sufficiently resilient.
Conclusion: Government agencies are slow in implementing open public sector information in line with freedom of information requirements. Agencies are challenged in terms of awareness of related government policies, in terms of cross-disciplinary collaboration, and in terms of obtaining funding for open data initiatives. The implications are not limited to government, but also affect the ability of Australian businesses to develop innovative products that derive value from Big Data in the public domain.
Conclusion: Today organisations need to adapt swiftly to changes in their external environment. Brittleness and inflexibility are characteristic of complex systems that lack modularity and redundancy. Resilient systems offer an appropriate level of redundancy at all levels of abstraction: from replicated skill sets within organisational structures to physical redundancy of hardware. In other words, a simplistic focus on efficiency may introduce more risks than benefits.
The topic of Big Data has been propelled from the engine room of the Web 2.0 giants into the mainstream press. Over the last decade, the volume of data that governments and financial institutions collect from citizens has been eclipsed by the data produced by individuals in terms of photos, videos, messages, as well as geolocation data on online social platforms and mobile phones, and also the data produced by large scale networks of sensors that monitor traffic, weather, and industrial systems.
IBRS has always recognised data as the key to value creation, and has built up an extensive body of research on the latest trends and the shift from enterprise data to “big data” that is currently unfolding. This white paper addresses the scale and the business implications of this shift.
Conclusion: The concept of service virtualisation is fundamental to the development of scalable service oriented architectures (SOA) and to the implementation of a DevOps approach to software change and operations. On the one hand service virtualisation enables the development of resilient high-availability systems, by enabling dynamic switching between different service instances that may be running on completely independent infrastructures. On the other hand, service virtualisation enables realistic integration tests of non-trivial Web service supply chains.
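The two benefits named above (resilient switching between independent service instances, and realistic integration testing against stubs) both follow from the same mechanism: callers bind to a stable virtual endpoint rather than a concrete instance. A minimal sketch, with invented service names, might look like this:

```python
class VirtualService:
    """Minimal sketch of service virtualisation: callers invoke a stable
    virtual endpoint, while the concrete backends behind it can be
    swapped at runtime (failover to an instance on independent
    infrastructure, or a stub during integration tests)."""
    def __init__(self, *backends):
        self._backends = list(backends)  # ordered by preference

    def call(self, request):
        # Try each registered backend in turn; fail over on error.
        last_error = None
        for backend in self._backends:
            try:
                return backend(request)
            except Exception as exc:
                last_error = exc
        raise RuntimeError("all service instances failed") from last_error

# Illustrative backends: a primary instance that is down, and a
# healthy secondary running on independent infrastructure.
def flaky_primary(request):
    raise ConnectionError("primary instance down")

def healthy_secondary(request):
    return {"status": "ok", "echo": request}
```

For integration testing, the same `VirtualService` can be constructed with stub backends that simulate each partner in a Web service supply chain, without touching production systems.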
Conclusion: DevOps is a grassroots movement that is only a few years old but has quickly spread across the globe, and its influence is present in virtually all organisations that operate popular Cloud services. DevOps is a portmanteau of software system Development and Operations, referring to the desire to bridge the gap between development and operations that is inspired by agile techniques, and that is driven by the need to continuously operate and upgrade Cloud services. The DevOps movement is having a profound impact in terms of the tools and techniques that are used in the engine rooms of Clouds, leading to order of magnitude changes in the ability to perform hot system upgrades.
Conclusion: The maturity of information management practices in an organisation has a direct effect on the ability to achieve business goals related to supply chain optimisation, the quality of financial decisions, productivity, and quality of service. The exponential growth of unstructured information is no replacement for structured information. Quite the opposite: a stream of unstructured Big Data can only be turned into tangible value once it is channelled through a distillery that extracts highly structured information accessible to human decision makers, and that can be used to provide a service to the public or to drive a commercial business model. The transformation of unstructured data into knowledge and actionable insights involves several stages of distillation, the quality of which determine the overall performance of the organisation.
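The distillery metaphor above can be made concrete with a toy pipeline. The stages, the message format, and the field names below are all invented for illustration; the point is that each stage discards noise or adds structure, and the quality of each stage bounds the value of the output.

```python
import re

def distil(raw_messages):
    """Sketch of a multi-stage 'distillery' that turns a stream of
    unstructured text into structured, actionable records."""
    # Stage 1: discard messages that carry no recognisable signal.
    candidates = (m for m in raw_messages if "order" in m.lower())
    # Stage 2: extract structured fields from free text.
    pattern = re.compile(r"order\s+#(\d+)\s+for\s+\$(\d+(?:\.\d{2})?)", re.I)
    matches = (pattern.search(m) for m in candidates)
    # Stage 3: emit records that human decision makers or downstream
    # systems can act on; anything that failed extraction is dropped.
    return [
        {"order_id": int(m.group(1)), "amount": float(m.group(2))}
        for m in matches if m
    ]
```

A real distillery would add stages for validation, deduplication, and enrichment, but the shape is the same: unstructured volume in, a much smaller set of highly structured records out.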
Some standards are undeniably useful, and the benefits of these standards can typically be quantified in terms of improvements in quality and productivity due to increases in the level of automation and interoperability. In contrast, other standards mainly fuel a certification industry that has developed around a standards body, without leading to any measurable benefits, whilst clearly adding to the operating costs of those organisations that choose to adopt such standards.
Conclusion: Increasingly, organisations are recognising that they can benefit from a so-called software product line approach. The transition from an IT organisation that operates entirely in project delivery mode to a product development organisation that introduces a product line governance process is a significant undertaking. The process involves the designers of business information services as well as Enterprise Architects and other domain experts. Achieving the benefits of a product line approach (systematic reuse of shared assets) requires the adoption of a dedicated product line engineering methodology to guide product management, design, development, and operations, and it also requires knowing where to draw the boundary between product development and the delivery of professional services.
A framework for information management maturity assessments: is the organisation ready for Big Data?
Conclusion: Effective data science requires a cross-disciplinary team of highly skilled experts, as well as data in sufficient quantity and quality. These requirements imply a level of maturity in information management that is beyond the capability of most organisations today. An information management maturity assessment can help determine whether an organisation is ready to embark on a big data initiative, and to identify any concrete deficits that need to be addressed.
Conclusion: There are many links between the story of data warehousing and the story of SAP adoption, going all the way back to 1997, when SAP started developing a “Reporting Server”. Over the following decade SAP firmed up its dominant position as a provider of Enterprise Resource Planning functionality, creating countless business intelligence initiatives in the wake of SAP ERP implementation projects. Up to 80% of data warehouses have become white elephants, some completely abandoned, and others have been subjected to one or more resuscitation attempts. Big data can either be the last nail in the coffin, or it can be the vaccine that turns the colour of the data warehousing elephant into a healthy grey.
Conclusion: Direct dependencies between services represent one of the biggest mistakes in the adoption of a service oriented architecture. An event driven approach to service design and service orchestration is essential for increasing agility, for achieving reuse and scalability, and for simplifying application deployment. Complex Event Processing offers a gateway to simplicity in the orchestration of non-trivial service supply chains.
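The contrast between direct dependencies and event-driven design can be shown in a few lines. In the sketch below (a minimal publish/subscribe bus; the event names are invented), the service that publishes an event needs no knowledge of its consumers, which is what makes independent deployment and reuse possible:

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe sketch: services react to named events
    instead of calling each other directly, so a producer carries no
    direct dependency on any consumer."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every subscriber, in registration order.
        for handler in self._subscribers[event_type]:
            handler(payload)
```

With this shape, adding an invoicing or shipping step to an order workflow means subscribing a new handler to an existing event, not modifying the publishing service. Complex Event Processing extends the same idea by letting subscribers react to patterns across multiple events rather than single events.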
Conclusion: Big data not only refers to the growing amounts of netizen-generated online data, it also refers to customer expectations related to the data services provided by corporations and government departments. Increasingly corporate and individual service users expect not only a basic service, but also access to advanced tooling for data transformation, representation, and integration into other systems. In the future, the level of maturity and professionalism of an organisation will increasingly be determined by data-related quality of service characteristics. It is time for organisations to grow up, and to treat information services as a core product line.
Conclusion: When conceiving and designing new services, the primary focus of product managers and technologists is often on functionality, and adequate quality of service is largely assumed as a given. Similarly, from the perspective of a potential user of a new service – the user is mainly concerned about the functional fit of the service, and is prone to making implicit assumptions about quality of service based on brief experimental use of a service. The best service level agreements not only quantify quality of service, they also provide strong incentives for service providers and service users to cooperate and collaborate on continuous improvement.
Conclusion: Increasingly, organisations are looking beyond classical agile methodologies, towards lean techniques pioneered in industrial production. The transposition of lean techniques into the context of corporate IT is a challenge that requires a high level of process maturity and organisational discipline. The desired benefits only materialise if the lean approach is applied to processes that can be put under statistical control, and if the approach feeds into a domain engineering process that addresses the root causes of operational inefficiencies.
Conclusion: All organisations are multilingual, most more so than may seem apparent on the surface. A systematic effort to minimise the likelihood and impact of communication problems can lead to significant cost savings, productivity improvements, and improvement of staff morale. Data quality, the quality of system integration, and the quality of product or system specifications often turn out to be the Achilles’ heel. It is a mistake to assume that the biggest potential for misunderstandings is confined to the communication between business units and the internal IT department. Whilst some IT departments could certainly benefit from learning to speak the language used by the rest of the business, the same conclusion applies to all other business units.
Circa 1960: The “Hard theory of platforms”
In the early days of information technology, hardware was THE platform. Companies such as IBM and DEC provided the big iron. Business software was THE application. In those days even software was as hard as stone. The term application platform was unheard of.
Conclusion: Pattern-based and repeatable processes, such as gathering operational data, validating data, and assessing data quality, offer potential for automation. The Web and software-as-a-service technologies offer powerful tools that facilitate automation beyond the simple mechanical pumping of data from one system to the next. Operational management tasks that focus on administration and control can and should be automated, so that managers have time to think about the organisation as a system, and can focus on continuous improvement.
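The data-quality assessment mentioned above is a good example of a pattern-based process that can be automated. The following sketch (rule names and record fields are invented for illustration) turns row-by-row checking into a per-rule pass rate that a manager can monitor instead of performing manually:

```python
def assess_quality(records, rules):
    """Sketch of automated data-quality assessment: each rule is a named
    predicate applied to every record; the output is a per-rule pass
    rate suitable for a dashboard or exception report."""
    report = {}
    for name, predicate in rules.items():
        passed = sum(1 for record in records if predicate(record))
        report[name] = passed / len(records) if records else 1.0
    return report
```

Because the rules are plain data, adding a new quality check is a one-line change, and the same report can feed a continuous-improvement loop rather than an ad hoc manual review.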
Conclusion: The Australian Institute of Management recognises that leadership and management will need to continue to evolve to keep up with technological innovation and globalisation. Whilst organisations are usually aware of the need to keep up with technological changes, they often struggle with the practical implications for management and impact on organisational structure. On the one hand operational management can increasingly be automated, and on the other hand the ability to build and lead high performance teams is gaining in importance. Having appropriate people in executive team leadership positions is critical.
Conclusion: Over the last decade, the volume of data that governments and private corporations collect from citizens has been eclipsed by the data produced by individuals, as photos, videos, and messages on online social platforms, and also the data produced by large scale networks of sensors that monitor traffic, weather, and industrial systems. Web users are increasingly recognising the risks of handing over data-mining rights to a very small group of organisations, whilst getting very little in return. The pressure is on to develop robust solutions that not only deliver value, but also address concerns about data ownership, privacy, and the threat of data theft and abuse.
Conclusion: Does every organisation need a dedicated ECM system? Not necessarily. Given the breadth of the topic, it is common to use a combination of different systems to adequately address enterprise wide management of content. When embarking on an ECM initiative, it is important to set clear priorities, and to explicitly define the limits of scope, otherwise the solution that is developed may primarily be a costly distraction.
Conclusion: Educating executives in the essentials of information management and related technology trends is an ongoing challenge. CEOs and board members are being bombarded with simplistic marketing messages from the big global IT solution vendors, as well as the messages from the most prominent local IT service providers. The same vendors usually target CIOs and senior IT managers with a bewildering set of new, “must-have” technologies every year. To avoid spending millions of IT dollars on dead ducks, vendor claims must be deconstructed into measurable aspects of product or service quality.
Conclusion: The discipline of Enterprise Architecture has evolved from the need to articulate and maintain a big picture overview of how an organisation works, covering organisational structure, processes, and systems. Whilst Enterprise Architecture can assist in implementing industry best practices, several-fold improvements in productivity and quality are only possible if the organisation makes a conscious effort to attract and retain top-level subject matter experts, and if it commits to a so-called Domain Engineering / Software Product Line approach to the strategic analysis of market needs and the design of products and services.
Conclusion: Lock-in is often discussed in relation to external suppliers of products and services. In doing so it is easy to overlook the lock-in relating to internal tacit knowledge and in-house custom software. The opposite of lock-in is not “no lock-in”, it is lock-in to an alternative set of behaviour and structures. Even though organisations can sometimes suffer from an excessive degree of external lock-in, organisations also benefit from lock-in, in the form of reduced costs and risk exposure. The art of lock-in involves continuously monitoring the business environment, and knowing when to switch from external to internal lock-in and vice versa.
Conclusion: Lock-in to software technology always goes hand in hand with lock-in to knowledge. When using Commercial Off-The-Shelf (COTS) software, most of the lock-in relates to elements external to the organisation. In contrast, the use and development of open source software encourages development of tacit knowledge that extends into the public domain. It is time to move beyond the passive consumption of open source software, to remove business-risk inducing restrictions on the flow of knowledge, and to start actively supporting the development of open source software.
Conclusion: To date vendors such as Microsoft and Apple have been able to exploit operating systems as an effective mechanism for creating locked-in technology ecosystems, but the emergence of the HTML5 standard and Google Chrome sees the value of such ecosystems tending towards zero.
Providers of Cloud Computing services are united by the goal of minimising the relevance of in-house IT, from hardware right up to operating systems and higher-level infrastructure software. Enterprise application vendors such as SAP and Salesforce.com are pulling in the same direction. To avoid sunk IT costs and a dangerous level of technology lock-in, any further developments of in-house architectures and applications that ignore this trend should be re-examined.