Frequently Asked Questions on MicroMetriks

If you have additional questions, don't hesitate to contact the team.

How reliable is the detailed sector data used by our system, isn't it simply rubbish in, rubbish out?

Although widespread scepticism clouds government statistics, we would claim that there are statistics, and then there are statistics!

Much of the cynicism surrounding the reliability of Government data stems from the frequent and quite sizeable revisions made to National Accounts, most notably the quarterly GDP and balance of payments releases.

We would remind readers that National Account figures are quite different from the detailed sector level data on which our work focuses. Series such as the measure of national GDP are based on estimates derived from consumption, income and production sources. By contrast, the sector level data which we use – for instance product by product pricing – derive directly from statistically reliable samples of manufacturers in each segment. Moreover, whilst National Account series are produced at best quarterly, our series are released monthly.

Indeed, for its producer price survey the Office for National Statistics stresses that pricing should reflect the struck deal, not the list charge. Although consumer prices themselves reflect ticket prices, it is possible to construct price deflators that reflect Buy-One-Get-One-Free – BOGOF – discounts by using value and volume measures (see our section on the sourcing of the data, found elsewhere on this site).
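As an illustration, a minimal sketch of how a value/volume deflator captures a BOGOF promotion (all figures hypothetical):

```python
# Sketch: deriving an effective price deflator from value and volume,
# so that Buy-One-Get-One-Free promotions show up in the price measure.
# All figures are hypothetical.

# A retailer lists at 2.00 per unit; in month two a BOGOF offer runs.
months = ["month_1", "month_2"]
sales_value = [200.0, 200.0]   # total revenue taken
sales_volume = [100, 200]      # units leaving the shelf (BOGOF doubles volume)

# The implied deflator is value divided by volume: the price actually
# paid per unit, not the ticket price.
effective_price = [v / q for v, q in zip(sales_value, sales_volume)]
print(effective_price)  # the effective price halves: [2.0, 1.0]
```

The ticket price never moved, yet the value/volume measure records the discount – which is precisely the point of deriving deflators this way.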

We should moreover remind ourselves that the sector level data is aggregated to create the headline measures used extensively by the monetary authorities to set interest rates. Quite simply, if the granular sector data is thought compromised, the implications are considerable.

Where does the detailed sector data actually originate? Can you tell me who contributes?

As already pointed out in answer to Q1, the data is survey based, and as such reflects actual information collected from private and publicly listed companies, both locally and overseas owned. To avoid disclosing company-sensitive information, the names of actual contributors are not released (see Q6).

Whilst there is no formal list of contributors, we can make an educated guess as to who might be part of the sample. In our company-to-sector mapping within the website we present a list of companies with operations in each of the detailed sectors. Remember, we are not suggesting that the companies listed contribute to the data, simply that the market conditions prevailing within each market should be considered relevant.

How often are revisions made to the sector level data?

Revisions are not a concern at the detailed sector level.

As already noted in answer to Q1, revisions are clearly the bane of any user of Government statistics. Indeed, for the much analysed GDP series revisions show a systematic bias, with four out of five tending to be upwards. Moreover, as well as this implicit bias, revisions are often retrospective, occasionally running back over several years of data. Taken together, the bias and lags to revisions make National Account series notoriously troublesome for policy purposes.

Now whilst revisions do occur for our detailed sector level data, they are largely centred on the number released one month earlier. Indeed, it is important to understand why revisions happen for a survey. As already mentioned detailed sector level data is collected on a survey basis. Even if a respondent to the survey is late in delivering a particular month’s data the overall figure for the product is calculated nonetheless using those respondents who have made the deadline, thus producing a preliminary observation. In the following month’s release the data from late respondents is included, leading to a revision to the preliminary figure.
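The mechanics can be sketched in a few lines (respondent names and figures are hypothetical):

```python
# Sketch of why survey-based revisions happen: a preliminary figure is
# computed from on-time respondents, then revised once late returns
# arrive. All names and prices are hypothetical.
on_time = {"firm_a": 102.0, "firm_b": 98.0, "firm_c": 101.0}
late = {"firm_d": 99.0}

# Preliminary observation: average of respondents who made the deadline.
preliminary = sum(on_time.values()) / len(on_time)

# Following month: late returns are folded in, revising the figure.
all_returns = {**on_time, **late}
final = sum(all_returns.values()) / len(all_returns)

revision = final - preliminary
print(round(preliminary, 2), round(final, 2), round(revision, 2))
```

The revision is simply the arithmetic effect of the late return; it carries no directional information about the sector itself.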

Crucially, revisions based on lateness of response show no systematic bias; that is to say, on average, revisions are neither more likely to be positive nor negative. Using a moving average – in our case one of three months – helps to smooth what revisions do occur.
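A minimal sketch of the three-month smoothing (figures hypothetical):

```python
# Sketch: a three-month moving average damps the noise that revisions
# introduce into a monthly series. Figures are hypothetical.
series = [100.0, 101.5, 100.8, 102.1, 101.9, 103.0]

def moving_average(xs, window=3):
    # Trailing average over the last `window` observations.
    return [sum(xs[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(xs))]

smoothed = moving_average(series)
print([round(x, 2) for x in smoothed])
```

Each smoothed point blends three months, so a revision to any single month moves the smoothed series by only a third of its size.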

How sizeable are revisions to the sector data?

It can be shown quite clearly that revisions at the sector level tend to be comparatively small.

Moreover, it can further be shown that revisions to detailed sector data tend to be statistically random, showing a tendency to be neither raised nor lowered systematically (see Q3). This leads us to what may seem a paradoxical claim: revisions at the detailed sector level tend to be proportionately smaller than those for more aggregated series. As sectors are aggregated, revisions become more frequent and more significant.

On balance then users should not confuse the detailed sector data collected and disseminated by the ONS with its National Accounts releases.

Do some countries have better detailed sector level data than others?

Sure, data quality differs markedly across countries.

However, we would argue that data for the countries we cover is of the highest provenance. Indeed, the UK has data which has improved markedly over the years, doing so from an already high base. Japan and the US both have impressive data pedigrees of their own. Moreover, the introduction of Eurostat has lifted the quality of Continental European data, forcing the sourcing of Italian and Spanish data towards the best practice for German data collection.

What of disclosure, could we find that detailed sector level data is suddenly withdrawn?

The suppression of a particular sector’s data is always possible, but such instances are rare. At present over 300 fully ‘live’ sectors are available for the UK.

Whilst the detailed economic sector data is drawn directly from operating divisions, it rarely suffers from disclosure concerns (see Q2). More specifically, when released the data is a sector-wide average and can therefore rarely be considered to reflect a single company’s performance.

Admittedly, if a concern does exist – invariably voiced by one of the companies submitting the information – that the publication of micro-economic data might be disclosive, then the authorities, whilst continuing to collect the data, may agree to suppress its issue. Importantly, as already noted, such instances are rare. In the UK one example is the release of pricing and volume data for the production of industrial gases. However, some reverse engineering can actually produce this information anyway!

Can we claim that the signals emerging from the data are predictive of revisions to corporate earnings?

Whilst not perfect there is a correlation between our signals and announced earnings, certainly at the sector level and most strongly for negative surprises.

Our work might be represented as part of the second generation of anticipatory earnings-revision systems. In the first generation of these anticipatory models the catalysts were company-specific signals (for instance those detected from directors’ dealings). In second generation models the signals now largely derive from timely micro-economic sector data. To help identify sectors where revisions are most likely we compare the Rate Of Growth Expected in Revenues – ROGER – with actual performance. The idea is that most forecasting is autoregressive of order one and as such misses second order inflexions. This is covered more fully elsewhere on the site.
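As a rough illustration of the ROGER comparison – the figures and the naive forecasting rule below are our own assumptions for the sketch, not the system's actual implementation:

```python
# Hypothetical sketch of the ROGER idea: an AR(1)-style forecast simply
# extrapolates last period's growth rate, so it misses a turning point
# ("second order inflexion") that timely sector data picks up early.
revenues = [100.0, 110.0, 121.0, 115.0]  # growth turns down in the last period

# Period-on-period growth rates.
growth = [revenues[i] / revenues[i - 1] - 1 for i in range(1, len(revenues))]

# AR(1)-style expectation for the final period: repeat the prior growth rate.
expected_growth = growth[-2]
actual_growth = growth[-1]

# A sizeable negative gap flags a sector where downward earnings
# revisions look likely.
gap = actual_growth - expected_growth
print(round(expected_growth, 3), round(actual_growth, 3), round(gap, 3))
```

The forecast keeps projecting 10% growth; the sector data shows an outright fall, and it is that gap which the signal exploits.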

Now, there are frequent instances when the signals emerging from the detailed sector level data fail to match the aggregated earnings figures reported by a cohort of companies operating within the specified sector. This seeming inconsistency can be explained by the difference between economic and accounting profitability. Our methodology takes no account of tax or interest costs, depreciation or amortisation. In fact it could be argued that our process leads to a cleaner measure of how earnings evolve with time (this is covered more fully in an accompanying section on this site).

Is there any evidence that the signals emerging from the detailed sector data improve one's performance in stock picking, or in a long-short sector strategy?

Given the multitude of factors which impact performance, this is practically an untestable hypothesis!

Clearly, the performances of stocks and sectors are driven by a host of factors, of which earnings momentum is but one. Moreover, investment is not a laboratory science, with regular unanticipated shocks influencing stock and sector performances. Such shocks include corporate action as well as geo-political events.

Nevertheless, where the detailed sector data does have a proven performance record is ‘on the short side’. Specifically, the data is best suited to anticipating downward earnings revisions or indeed outright profit warnings (see Q7). Furthermore, the data’s efficacy in ‘predicting’ earnings warnings has been best for ‘single-cell’ companies, i.e. those whose activity is confined largely to one sector within one economy.

Do certain sectors emerge more than others in terms of the system's usefulness?

Yes. There is no denying that the system is stronger in some areas than in others.

Clearly, the system relies on collecting monthly data which reflects movements in pricing, volume and costs. Since such data is currently largely confined to the industrial, construction and retail sectors, it is in these areas that we major. However, efforts have been under way for some time to extend coverage to the distribution and service sectors, a process of enlargement which continues and should see a significant increase in the number of sectors under observation.

If the sector level data is so useful why isn't it more widely exploited, say by company management, or by industry analysts?

We would suggest that the general lack of market interest in the detailed sector data reflects less on the data itself than on the corporate and investment research communities! (see Q20).

We would argue that the sector level data has been overlooked for a number of reasons, not least the considerable investment of time required to marshal the information. It should be remembered that each of the 300 or so two-digit sectors has its own price and volume series, along with a number of cost components (see Q20).

What if the management of a company operating within a particular sector contests the data's conclusions? Does this disqualify the data?

Instances will always exist where a company’s management will claim that the data for a sector in which it operates does not resemble its own conditions. The reason for what might seem such an inconsistency is that each sector’s data is an average of all its respondents. It is quite possible therefore that individual companies may be seeing somewhat different conditions from the market average.

Now there is no denying that a company’s management is more than familiar with the strength or weakness in its own pricing, costs and demand. However, less clear might be how its pricing, costs and sales compare with industry-wide averages. Indeed, where a company’s pricing varies periodically under contract, we would suggest that it might have poor knowledge of the most recently struck industry pricing. Such divergences could then lead to a management contending that the data is inaccurate. Nevertheless, we would argue that a knowledge of the most recently struck industry pricing should help forewarn – and thus forearm – its management at contract renewal (see our section on the use of the data in the boardroom, found elsewhere on this site).

Should a company's management care much about data other than for its own sector or sectors of activity?

Yes it should, with its focus as much on its customer and supplier markets as its own.

Consider the usefulness to a particular company’s management of being aware of how the prices charged by its customers are collectively moving, particularly ahead of setting its own pricing. Moreover, management can also monitor the pricing of its suppliers, prices which, it has to be remembered, are its own costs. If supplier prices are set according to some form of fixed-term contract, a knowledge of recent trends in such pricing will help forewarn a company’s management, and possibly help avoid nasty surprises at contract renewal!

More generally, we would maintain that a knowledge of how pricing and demand are performing along the length of any particular supply-chain should be useful for all its participants. This is the Verbund analysis which is developed more fully elsewhere, but which we summarise next.

The analogy we tend to use for our Verbund work is that of a motorist and his reliance as much on his rear-view and side mirrors as on observations made through his windscreen.

As a motorist requires near-360° visibility, so too does a company’s management. Firstly, it should be aware of the market conditions for its own sector – the view through its notional wing mirrors. Secondly, management should have a knowledge of the market conditions being experienced by its customers – seen through the windscreen. Finally, it should be apprised of how its suppliers are performing – as seen through our metaphorical rear-view mirror. To be forewarned is to be forearmed.

How can detailed sector data hope to prove useful for companies whose activities range across industries and geographies?

There is no denying that the detailed sector data we use is best suited to companies whose operations are contained within a single sector in one country.

However, this is not to exclude the data’s usefulness for companies spread more widely across sectors and geographies. By decomposing each company into its main areas of activity one is able to inspect each sector in turn.

As a final note, for the UK at least, old style conglomerates began to disappear some time ago.

Whilst it's all well and good seeing how a sector has performed, the data tells us nothing about the future, which is what really matters!

True, the data offers a historic view of a sector’s performance and, at best, a real time assessment. Nevertheless, such detailed sector data can help inform a company’s management on how its market sector performed in the past under specific macro-economic conditions.

For example, one could use detailed data to see how a particular sector performed when the currency was strengthening, or when interest rates were changing swiftly. Not unreasonably assuming that the past can inform the future, such a historical assessment could prove invaluable entering fast moving economic waters.

Our argument is that cyclical gyrations witnessed in the past could easily act as reliable case studies for “what-if?” type forecasts. Quite simply, a specific micro sector’s past performance through a particular macro-economic climate could be considered a reliable guide to its most likely prospects under similar conditions. We repeat, to be forewarned should be to be forearmed!
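A minimal sketch of such a "what-if?" case study, conditioning a hypothetical sector's growth on a macro-economic regime (all figures and the regime flag are our own illustrative assumptions):

```python
# Sketch: average a hypothetical sector's monthly growth separately for
# months when the currency was strengthening versus weakening, as a
# rough guide to likely behaviour next time conditions repeat.
observations = [
    # (sector growth %, currency strengthening?)
    (0.4, True), (-0.2, True), (0.1, True),
    (0.8, False), (1.1, False), (0.9, False),
]

strong = [g for g, s in observations if s]
weak = [g for g, s in observations if not s]

avg_strong = sum(strong) / len(strong)
avg_weak = sum(weak) / len(weak)
print(round(avg_strong, 2), round(avg_weak, 2))
```

In this toy history the sector grows far more slowly under a strengthening currency – exactly the kind of conditional average that can serve as a case study for the next such episode.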

Could respondents corrupt their data entries?

Sure, each respondent is free to fill in what he darn pleases.

However, as noted in answer to Q1, sample sizes are designed to be sufficiently large to avoid being compromised by isolated maverick answers. Indeed, unless it can be shown that respondents systematically collude to misrepresent their sector’s performance – all joining to over- or understate prices, for instance – then we would argue that this concern is misplaced. Moreover, the Government agencies employ experienced statisticians to screen for signs of bias or systematic outliers.

What of surcharges and subsidies? Do these not make a nonsense of the detailed price data?

Not really. As noted in Q2 price data should reflect actual charges rather than list prices.

With so many vendors of data now available alongside the ONS, practically competing for attention – including the CBI, the Chamber of Commerce and NTC Research – how do we avoid being confused in the mass of this information?

Information overload should be far from a concern. Indeed, close inspection of how ONS data compares with that released by other vendors shows a remarkably close correlation. Nevertheless, what makes the ONS stand out from other sources is the size of its sample and the depth of its sector detail. Moreover, the ONS is committed to broadening coverage to sectors not currently covered by monthly data releases.

What exactly is all this chain-linking, hedonic pricing, re-weighting and change in base year which goes on?

Clearly, as the relative sizes of sectors alter with time, and as products evolve, so changes need to be made to the structure of the datasets.

However, such changes tend to be isolated to certain sectors (hedonic pricing, for instance, is confined largely to new technology products) and to more aggregated sector levels (as component sectors have their weights altered).

Turning to changes of base year, it should be remembered that most series are indices, and as such the choice of base level and year is unimportant. Indeed, the process of rebasing in no way changes the dynamics of each series.
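A quick demonstration that rebasing leaves a series' dynamics unchanged (figures hypothetical):

```python
# Sketch: rebasing an index (here so that the third observation = 100)
# leaves period-on-period growth rates unchanged. Figures hypothetical:
# a series growing 4% per period.
index = [100.0, 104.0, 108.16, 112.4864]       # original base: first obs = 100
rebased = [x / index[2] * 100 for x in index]  # new base: third obs = 100

def growth(xs):
    # Period-on-period growth rates, rounded to suppress float noise.
    return [round(xs[i] / xs[i - 1] - 1, 6) for i in range(1, len(xs))]

print(growth(index) == growth(rebased))  # True: the dynamics are identical
```

Rebasing divides every observation by the same constant, so all ratios between observations, and hence all growth rates, are preserved.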

What is the point of collecting sector data for developed Western economies when all the real action is happening in Asia particularly China?

China is definitely the 800lb gorilla in the global economy.

Notwithstanding the pace of China’s growth it still ranks below the US, Japan and EU in the size of its economy.

Whilst data for China is probably years away from having the provenance required for it to be useful, the figures emerging for Japan and South Korea are themselves highly reliable. Moreover, data from other Asian Tigers is improving.

In short, taken together, Asian data does exist and offers a proxy, albeit imperfect, for the all important Chinese market.

Since industry analysts – particularly those employed by large investment banks – have regular and high level contact with the managements of the companies they follow, aren't they the best source of detailed sector knowledge?

Investment analysts play a crucial role in researching sector performance. Sadly however, despite the regulatory changes of recent years, conflicts persist.

In addition to their implicit conflicts, investment analysts suffer other shortcomings.

Firstly, they cannot claim to enjoy uninterrupted access to data, with closed periods breaking their communication link with management, certainly when it comes to retrieving ‘market sensitive’ information.

Secondly, they also suffer what we would claim is a ‘wood for the trees’ problem. Their focus on the specifics of each company within their sector of interest often sees them miss industry-wide developments (see Q10).

Thirdly, investment analysts target accounting profitability not necessarily the economic profitability which is our focus (see Q7).

In short, at some point investment analysts and corporate consultants will have to awaken to the usefulness of the detailed sector data on which we focus, and which they have largely ignored.

Suppose we accept the merits of using high frequency detailed economic information. In this case we have a concern. Since the data used in Quantmetriks is monthly, it is inferior to the weekly and sometimes daily information tracking prices for commodities such as various paper grades, metals and foodstuffs. Why is this not used more extensively within the system?

The structure of the Quantmetriks system relies on collecting a price, cost and volume series for each sector covered. At present the highest frequency for this data is monthly.

Whilst we would welcome weekly, or indeed daily, economic information, we have to accept that what data is available at these frequencies is sadly incomplete. Consider the paper sector (SIC 21). Data is available weekly for various paper grades. So too is data on the price of pulp – the main ingredient in paper production. However, whilst such pricing information is available, output data is not available at this frequency.

Have a question that needs answering? Speak to us

Contact us