Novelty indices are guided by the notion that creativity is not creatio ex nihilo but rather a cumulative process that manifests in atypical combinations of prior knowledge.
They were of the opinion that the dichotomy of competence-destroying or competence-enhancing technologies lacked nuance.
The article made waves in and beyond the science system and prompted a public debate surrounding the question of whether and why science is running out of steam despite the massive expansion of the (global) science system in recent decades.
They became iconic examples of the rise of the Sharing Economy, a term that gained traction following the 2008 global financial crisis.
We also discuss some of the potential reasons behind the rise of unicorn companies in the aftermath of the global financial crisis.
Against this backdrop, there is now a growing view that the EU should take action to help promising start-ups achieve their full potential.
In a winner-takes-all logic, it is imperative for a start-up to grow as quickly as possible to occupy market space before an incumbent firm can introduce a competitive product.
Unsurprisingly, Europe's first unicorn to turn into a decacorn was Klarna, a Swedish financial services provider established in 2005.
Unknowable risk covers unexpected events that represent a deviation from the status quo.
Using one's gut feeling proved to be a valuable strategy to deal with extreme uncertainty.
However, current research on these evolved entrepreneurial species is nascent and is usually lumped into venture capital conversations or media articles.
The linear model had overlooked the influence of technology on setting the scientific agenda, as demonstrated by the early decades of the industrial revolution when technological breakthroughs were only followed much later by their respective scientific explanations (Verbeek et al., 2002).
The use of the linear approach largely ignored (1) the empirical evidence that technological change often results from experience and ingenuity rather than from scientific theory or method, (2) the instrumental role of technological developments in inducing scientific explanation, and (3) the importance of technology-based instrumentation for scientific investigation (Verbeek et al., 2002).
The traditional understanding of the contribution of basic research to industrial innovation was critically and empirically scrutinized for the first time in the late 1960s.
Over the past couple of decades, it has become obvious that the linear model of the knowledge creation, transfer, and diffusion "value chain" is no longer valid, since it does not capture the current complexity and multiplexity of the relationships involved along the chain (Verbeek et al., 2002).
The present constraints on public expenditures in general, the enormous investments involved in sustaining the econo-techno-scientific complex, and the ongoing debate on the effectiveness of government-supported scientific research all heighten the need for more accountability and effectiveness in the area of publicly funded research. Therefore, at a policy level, disentangling S&T interleaving is assumed to yield considerable insight into how to handle these challenges (Verbeek et al., 2002).
New indicators of science linkage in Japan show an increasing trend over the past 10 years, while a traditional indicator (NPL citations) has stagnated over the same period.
The impact of science on technology is bound to be of central concern to science policy-makers.
Too much attention has been devoted to the relatively narrow range of scientific fields producing knowledge with direct technological applications, and too little to the much broader range of fields, the skills of which contribute to most technologies.
Food for thought.
We need to sample all three types of latent variables $\vec{z}, \vec{\varphi}$, and $\vec{\vartheta}$. However, with the technique of collapsed Gibbs sampling, $\vec{\varphi}$ and $\vec{\vartheta}$ can be integrated out owing to the conjugate priors $\vec{\alpha}$ and $\vec{\beta}$.
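For reference, after integrating out $\vec{\varphi}$ and $\vec{\vartheta}$, the full conditional for the topic assignment $z_i$ of token $i$ (an instance of word $t = w_i$ in document $m$) takes the standard collapsed form below; the count notation $n$ is ours, not the paper's:

$$
p(z_i = k \mid \vec{z}_{\neg i}, \vec{w}) \;\propto\; \frac{n_{k,\neg i}^{(t)} + \beta_t}{\sum_{t'=1}^{V}\bigl(n_{k,\neg i}^{(t')} + \beta_{t'}\bigr)}\,\bigl(n_{m,\neg i}^{(k)} + \alpha_k\bigr),
$$

where $n_{k,\neg i}^{(t)}$ counts assignments of word $t$ to topic $k$, $n_{m,\neg i}^{(k)}$ counts tokens in document $m$ assigned to topic $k$ (both excluding token $i$), and $V$ is the vocabulary size.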
How does our proposed model stack up against the other state-of-the-art models?
“We're not saying that bringing money is not important,” he says, but industry sponsorship did not provide an edge, perhaps because the academics in the samples received enough funding from other sources.
In 2014, the latest year available, it was US$3.6 billion, or 5.7% of total funding, on par with investment from non-profits, and state and local government.
There is widespread agreement that collaborations between industry and academia are good for the businesses involved, and generally have a positive effect on the economy.
This differentiation is crucial for prioritizing research efforts, identifying emerging areas of interest, and making informed decisions in policy-making, technology transfer, and investment.
New issues of major journals are included within 3 months of publication at the latest, and often even sooner.
The recent developments towards more systemic conceptualizations of innovation dynamics and related policies highlight the need for indicators that mirror the dynamics involved.
In just 5 days, the number of users of this technology reached 10 lakh (one million).
DANN performs much better in this setting, showing that it correctly classifies more highly weighted FoS labels, whereas our methods perform similarly on both metrics, suggesting that SciNoBo generalizes better overall but performs poorly on some highly weighted FoS labels.
An extremely competitive business environment requires every company to monitor its competitors and anticipate future opportunities and risks, creating a dire need for competitive intelligence.
The 50 most prolific countries in coronavirus research during the study period were selected as the sample.
The pandemic stimulated the innovativeness of many companies, which repurposed their slack resources and created product innovations.
Climate change is expected to have far-reaching effects on the electricity grid that could cost billions and could affect every aspect of the grid.
Over the 18-year period under scrutiny, academic patents are found to account for more than 11% of all patents invented in the country.
It is only of late that users of inventor data have started to openly discuss the disambiguation techniques they employ.
Co-inventors may also exchange other resources, whether substantive or symbolic.
The evidence is mixed and does not lend itself to easy interpretation.
The effectiveness of collaborative ties as conduits of knowledge decays over time.
The presence of such references may be taken as an indication of the knowledge indebtedness of the invention to the cited research.
We chose three technological fields, i.e. lasers, semiconductors and biotechnology, which are characterized by a strong reliance on scientific developments and, therefore (at least potentially) involve high levels of interaction among the individuals involved in science and those involved in industrial research.
In the wake of the Bayh-Dole Act, several US scholars have attempted to estimate the volume of academic patenting showing its impressive growth.
A related line of enquiry investigates the patterns of scientific paper co-authorship among academic and corporate scientists, and provides evidence of increasing levels of collaboration across organisational boundaries.
Notwithstanding recent advances in the measurement of science-technology linkages, the extent of the connectivity between the communities of scientists and technologists has not been explored in detail.
Many governments around the world are looking for ways to encourage technology transfer from university to industry, through measures and instruments aimed at supporting academic scientists to assume more entrepreneurial attitudes, particularly through the enforcement of intellectual property rights over their discoveries.
Nano Science again takes the lead.
Its use requires less of a background in statistics.
Since topic-vector representations are ultimately probability mass functions over the topics, all the components in $\vec{x}_i$ lie in the interval [0, 1], and altogether add up to 1.
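As a minimal numerical illustration of this property (the values below are hypothetical, not taken from any result):

```python
import numpy as np

# Hypothetical topic-vector representation x_i of a document over K = 4 topics.
x_i = np.array([0.10, 0.25, 0.05, 0.60])

# Every component lies in the interval [0, 1] ...
assert np.all((x_i >= 0.0) & (x_i <= 1.0))
# ... and together the components sum to 1 (up to floating-point tolerance).
assert np.isclose(x_i.sum(), 1.0)
```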
We wish to emphasize that expert opinion is necessary for contextualizing and describing the seminal works, but RPYS can expedite the process of finding milestones in the literature.
Despite the relationship between scientific discovery and biomedical advancement, identifying these research milestones that truly impact biomedical innovation can be difficult and is largely based solely on the opinions of subject matter experts.
As of this writing in June 2020, more than 6.5 million COVID-19 cases had been reported worldwide, resulting in more than 500,000 deaths, with numbers increasing daily.
Since our own expertise on the topic of peer review is limited, we refrain from providing an interpretation of the results.
In contrast to citations, patent classification does reflect the subject matter of the patent.
It is noteworthy that these scholars have not claimed that novelty is the sole driver of economic value nor that recombination only serves to generate novel ideas and has no direct effects on the degree of economic value created.
As an aside, there seems to be no published standard protocol for expert validation of clustering solutions.
Evaluation results clearly and consistently demonstrate the effectiveness and superiority of the new model with respect to several state-of-the-art peer models.
While the pragmatic reason to remove stopwords is to reduce the computational cost of the topic model inference, the most exciting potential of stopword removal in topic modelling is an improvement in the quality of the inferred topics.
While both calculations appear to show a significant overlap between our approach and TF-IDF, a subtler analysis reveals large differences.
For concreteness, we consider topic modelling, a paradigmatic unsupervised approach for automatic organization of collections of documents.
The long time it takes to digest a scientific paper poses great challenges to the number of papers people can read, impeding a quick grasp of major activities in new research areas, especially for intelligence analysts and novice researchers.
Of note is the power-law-like distribution of label frequencies for each corpus.
This case was selected because we are well acquainted with the topic.
In the reviewed philosophy and complexity literature, all of these aspects appear with different meanings, sometimes with a well-articulated definition.
Amid these diverse views of the interplay between scientific and technological progress, there are many anecdotes but little systematic evidence.
Most FTA endeavors now purport to inform policy processes for those addressing Science, Technology & Innovation (ST&I) and the management of technology (MOT).
A case in point is Topic 2, which clusters with biological research.
By using topic modeling, we work around the challenges of human-assigned labeling and enable an unsupervised method to draw out latent topics from the semantic text, excluding the meta-information embedded in each publication.
Novel text-mining methods create value by being able to create practical categories from semantic text, rather than using preordained categories, keywords, or citations.
Frequently used methods include human-assigned subject categories, analyzing citation chains or genealogies, and co-word analysis of keywords.
Integrating this system are multiscale social networks that are rife with structural, social, economic, and behavioral complexity.
In the next step, we run a set of econometric analyses controlling for scientific-field effects and other potential confounders, as described above.
This holds first and foremost for the quality of the scientific publication, as the prior literature has identified that more highly cited publications are more likely to be referenced in technological inventions.
Scientific breakthroughs often require novel approaches, which at the same time, however, also face a higher level of uncertainty and potential resistance by incumbent paradigms.
Journal-level analysis is well-positioned to distinguish domains of knowledge while having precedence in the literature for being relatively transparent, interpretable, and computationally feasible.
According to each journal's subject area, the ISI currently defines three fields and constituent subfields.
Results show that inventions identified as technologically novel are significantly overrepresented among a set of award-winning inventions, and significantly underrepresented among inventions that were refused a patent because of lack of novelty.
Because of their distinct profile, unpacking the drivers and effects of radical innovations is of major interest to scholars studying the economics and management of innovation.
To extract the collection of SAO structures from the DWPI enhanced abstract, we developed a bespoke program based on the Stanford Parser.
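The bespoke program itself is not shown here; purely as an illustrative stand-in for the idea (not the authors' implementation), the following sketch extracts naive Subject-Action-Object triples from a dependency parse using spaCy instead of the Stanford Parser:

```python
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_sao(text: str) -> list[tuple[str, str, str]]:
    """Extract naive (subject, action, object) triples from a text."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            # Children of the verb carrying subject / object dependency labels.
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj", "attr")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    return triples

print(extract_sao("The laser emits a coherent beam."))
# e.g. [('laser', 'emit', 'beam')]
```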
These bibliometric indicators imply that China's research is addressing important problems, advancing knowledge, and drawing researchers' attention.
Most tellingly, while China's citation rate lags behind that of the US, the gap seems to be moderate in recent years.
The project aims to develop a methodological framework and associated tools for analyzing NESTs to help policy makers and R&D managers make better-informed decisions regarding innovation pathways.
The actual value ratio between full authorship and contributorship is something that can be left to the market to determine.
We provide relatively low-cost solutions that occupy a middle ground in these debates and innovate in two ways.
Elsewhere, we demonstrate its usefulness for analyzing other sources of text of interest across political science.
Criteria for authorship have been discussed at length, because of the inflationary increase in the number of authors on papers submitted to biomedical journals and the practice of "gift" authorship, but a simple way to determine credit associated with the sequence of authors' names is still missing.
The other extreme case is when all papers pertaining to the topic of $p_0$ are joint publications between $a_1$ and $a_2$.
It is, however, common knowledge and well-documented that the ICMJE recommendations are seldom followed completely and often even disregarded.
Fractional counting is based on no other information than the number of authors.
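A minimal sketch of this counting rule (the function name is ours): each of a paper's $n$ authors receives $1/n$ credit, regardless of byline position.

```python
def fractional_credit(n_authors: int) -> float:
    """Each of the n authors of a paper receives 1/n credit; no
    information beyond the number of authors is used."""
    if n_authors < 1:
        raise ValueError("a paper must have at least one author")
    return 1.0 / n_authors

# A three-author paper contributes 1/3 of a paper to each author's tally.
assert abs(fractional_credit(3) - 1 / 3) < 1e-12
```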
The remainder of this manuscript is organized as follows.
While many important and interesting problems can be examined without individual-level data, a great many others require such data to get to the real heart of the matter.
This time-window reconciles three requirements: (i) articles should not be too recent, so that they have accumulated a certain amount of citations; (ii) articles should not be too old, so that our analysis can bring out the current error propensity of databases; and (iii) the overall dataset should be relatively large, for the results to be statistically robust.
It is not without reason that MPA has been widely used in academia in recent years.
The former focuses on methodology, and the latter dwells upon applications.
The last few years have seen a growing interest in main path analysis among scholars across a wide spectrum of disciplines.
The investigation of the dynamics of national disciplinary profiles is at the forefront of the quantitative study of science.
Section 2 discusses the methods for network creation and the concept of the patent family as a unit of analysis; here, we also discuss the preference for citation networks over other methodologies.
Looking across all document-pivoted model results, one can see a clear distinction between the relative performance of LDA and SVMs on the power-law datasets vs. the non-power-law datasets.
Although these models are related to varying degrees to the dependency-LDA, as unsupervised models they are not directly applicable to document classification.
Due to the mismatch between the generative description of L-LDA and how it is employed in practice, we find it pedagogically useful to distinguish between the models presented here and L-LDA.
The methodology used to arrive at these results is presented below, but before delving into the specifics, a discussion of the literature and theory pertinent to this indicator is in order.
Direct citation (DC) was a distant third among the three.
Methods and processes for delineating topography go by multiple names including partitioning, clustering, topic detection, community detection, etc.
By expanding the quadratics, collecting powers of $\mu$ together, and then completing the square, it is straightforward to show that (3.5) has the form of another normal density.
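Since (3.5) itself is not reproduced here, the following generic sketch (our assumption about the setting: a Gaussian prior $\mathcal{N}(\mu \mid \mu_0, \sigma_0^2)$ combined with a Gaussian likelihood over $N$ observations with known variance $\sigma^2$) shows the pattern of the manipulation:

$$
p(\mu \mid \mathbf{x}) \;\propto\; \exp\!\left\{-\frac{1}{2\sigma^2}\sum_{n=1}^{N}(x_n-\mu)^2 - \frac{(\mu-\mu_0)^2}{2\sigma_0^2}\right\}
\;\propto\; \exp\!\left\{-\frac{1}{2}\left(\frac{N}{\sigma^2}+\frac{1}{\sigma_0^2}\right)\mu^2 + \left(\frac{\sum_n x_n}{\sigma^2}+\frac{\mu_0}{\sigma_0^2}\right)\mu\right\},
$$

and completing the square yields $\mathcal{N}(\mu \mid \mu_N, \sigma_N^2)$ with $\sigma_N^{-2} = \sigma_0^{-2} + N\sigma^{-2}$ and $\mu_N = \sigma_N^2\left(\mu_0/\sigma_0^2 + \sum_n x_n/\sigma^2\right)$.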
Each token with a standard contraction was then separated into a root and a contraction (e.g., don't – do not). Contractions were then removed, since all such suffixes are forms of words found on standard stopword lists or are possessive forms.
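A minimal sketch of this preprocessing step, assuming a small illustrative contraction map (the actual mapping and stopword list used were not specified):

```python
# Assumed, partial contraction map; the original list was not given.
CONTRACTIONS = {"n't": "not", "'re": "are", "'ll": "will", "'ve": "have"}

def strip_contraction(token: str) -> str:
    """Separate a contracted token into root + expansion, keeping only
    the root: the expansions ("not", "are", ...) appear on standard
    stopword lists, and possessive suffixes carry no content."""
    for suffix in CONTRACTIONS:
        if token.endswith(suffix):
            return token[: -len(suffix)]
    if token.endswith("'s"):  # possessive form
        return token[:-2]
    return token

assert strip_contraction("don't") == "do"
assert strip_contraction("author's") == "author"
```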