Data management – Web Yantram (http://webyantram.com/)

Kitsumon launches NFT farming gameplay
https://webyantram.com/kitsumon-launches-nft-farming-gameplay/ (Tue, 20 Sep 2022 21:25:57 +0000)


Kitsumon has announced the launch of its breeding mainnet, which is now available. The release reflects the significant progress the game project has made since its testnet announcement on June 14, 2022, a period during which hundreds of users tested this aspect of gameplay.

Breeding hybrid kitsus

The breeding mainnet allows players to create “Hybrid Kitsus”, which combine two Kitsu creature NFTs. The offspring inherit genetic elements through digital DNA technology, providing over 17 trillion possible outcomes.
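
The article does not spell out how Kitsu DNA is actually encoded, but the scale of the claim is easy to sanity-check with a toy model: a handful of independently inherited gene slots multiplies into trillions of distinct genomes. The slot and variant counts below are assumptions chosen only to illustrate the arithmetic, not Kitsumon's real genetics.

```python
# Illustrative only: Kitsumon's actual DNA encoding is not described in the article.
# A modest number of heritable gene slots is enough to reach trillions of genomes.
import random

NUM_GENE_SLOTS = 9          # assumed number of heritable traits
VARIANTS_PER_SLOT = 30      # assumed variants per trait

def breed(parent_a, parent_b):
    """Each offspring slot is inherited from one parent at random."""
    return tuple(random.choice(pair) for pair in zip(parent_a, parent_b))

total_combinations = VARIANTS_PER_SLOT ** NUM_GENE_SLOTS
print(f"{total_combinations:,} possible genomes")  # 30**9 ≈ 19.7 trillion

parent_a = tuple(random.randrange(VARIANTS_PER_SLOT) for _ in range(NUM_GENE_SLOTS))
parent_b = tuple(random.randrange(VARIANTS_PER_SLOT) for _ in range(NUM_GENE_SLOTS))
print(breed(parent_a, parent_b))
```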

Players can get their hands on Egg NFTs through Kitsumon’s in-app marketplace and hatch those eggs into Kitsu NFTs through the KitsuDex. Players who hold the necessary potions, which can be purchased through the dedicated potions page, can then use those consumables to create a hybrid Kitsu through breeding.

The creation of Kitsus also encourages battles within the eventual MOBA, as well as player engagement and revenue generation for users. Breeding also promotes “bloodline royalties”, a royalty reward system built into the DNA of all Kitsus by their creators. When a Kitsu is created, it is imprinted with up to three creator addresses, forming a micro-ecosystem with income capabilities.

Players can also generate their own referral code through the new Player Rewards Dashboard, where they can get bonus rewards for everyone who gets potions through their unique link. Potions are an essential element for breeding to occur.

Kitsumon CEO James Kirkby had this to say:

“It has been a few months of continuous hard work and development by the entire Kitsumon team. We have seen the entire crypto market in a bear cycle, and this has been a unique opportunity to refine and take things to a new level in the background. We’re proud of the incredible accomplishments this year so far, and now, with the addition of a massive gameplay feature such as farming, the go-live not only fulfills a milestone but also our continuing promise to our fans and users to continuously develop and bring out exciting gameplay aspects of our game”.

Kitsumon is an NFT game about collecting, raising and maintaining adorable Kitsu pets. Its features range from fun professions like farming, fishing and cooking, and an in-depth NFT farming system, to MOBA PvP modes and land acquisition.


A more homogeneous, decentralized and democratized Internet
https://webyantram.com/a-more-homogeneous-decentralized-and-democratized-internet/ (Mon, 19 Sep 2022 03:53:31 +0000)

A concept currently circulating on the Internet and generating equal parts hype and speculation is Web 3.0. Various pundits have weighed in, trying to define what it is and isn’t. What is certain, however, is that Web 3.0 will be a decentralized ecosystem where infrastructure creators will own their data. The days of a single server and a single database are over, as Web 3.0 will evolve into a seamless way to connect people and devices. But to understand the implications of this transformation, it is important to go back to the ancestors of Web 3.0.

In the beginning, there was Web 1.0, which was above all a means of sharing and using information on static pages. With Web 2.0 came the advent of social media and the start of the era of user-generated content. What united these previous iterations was that the data was stored on servers owned by large companies and institutions. Even if the user was the original creator of the data, they were ultimately not the true owner. Although Web 3.0 has not fully taken shape, it will have significant implications for the way business is conducted because it is, unlike in the past, decentralized and uncontrolled by governments and corporations.

Centralized vs Decentralized

The internet was first structured around single-server ownership because it was the easiest way to build network infrastructure. Still to this day, the data is, in most cases, stored on a server or a cluster of servers in the cloud. And these servers belong to companies that manage data on behalf of users and even other companies. Web 3.0 will break this paradigm by moving from one server to many different, decentralized servers; as Berners-Lee envisioned, Web 3.0 will create a universal space not governed by a central authority.

Many believe that blockchain will be the “essential” enabler of Web 3.0 when it comes to decentralization. The blockchain will enable encryption and distributed computing, which means that data stored on a blockchain can only be accessed by authorized individuals. Essentially, the owner of the data will be the only one in control, i.e. the user. In addition, blockchain allows people and businesses to interact without intermediaries or third parties. A trustless transaction between two parties is best represented by exchanging Bitcoin directly with another person. However, since the Internet will theoretically no longer be owned by a central custodian, society will need to find a balance between trust and truth.

If blockchain technology plays a central role in the development of Web 3.0, we could see cryptocurrencies seamlessly integrate into the digital payment system. Decentralized ledger technology could also be used to democratize the world of digital content and protect copyrights by giving everyone the ability to create and monetize their content.

Facilitate trust

Some have speculated that Web 3.0 could be a solution to the hegemony of “big tech companies,” such as Google, Amazon and Microsoft, which together own more than 50% of all data centers. While some argue that centralized data is important for safeguarding the truth and preventing the spread of harmful or inappropriate content, others posit that data can be easily manipulated precisely because it is so centralized. Currently, in the Web 2.0 model, if a system administrator wanted to tamper with data on a server, they could hypothetically do so, and no one would ever know. With Web 3.0, centralized companies no longer own or control the data, as it is managed privately. Additionally, there would be multiple servers, each holding copies of the same data but overseen by different people or companies, and users cannot predict which server they will connect to, even though every server holds the same set of data. This unpredictability matters because an administrator cannot manipulate the data without making it very obvious: the altered information would no longer match the other servers. And by not having a centralized party generating the dominant opinion, the web becomes more democratic.
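
The tamper-evidence argument above can be made concrete with a toy sketch: several independently operated replicas hold copies of the same record, and hashing each copy makes a silent edit on one replica stand out immediately. This is a generic illustration under assumed node and record names, not a description of any specific Web 3.0 protocol or blockchain implementation.

```python
# Minimal sketch: a silent edit on one replica no longer matches the others,
# and hash comparison makes the divergence obvious. Node names and the record
# are hypothetical; this is not a real consensus protocol.
import hashlib
import json
from collections import Counter

def fingerprint(record: dict) -> str:
    """Deterministic hash of a record (sorted keys for stable serialization)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

replicas = {
    "node-a": {"user": "alice", "balance": 100},
    "node-b": {"user": "alice", "balance": 100},
    "node-c": {"user": "alice", "balance": 9999},  # tampered copy
}

hashes = {node: fingerprint(rec) for node, rec in replicas.items()}
majority_hash, _ = Counter(hashes.values()).most_common(1)[0]
outliers = [node for node, h in hashes.items() if h != majority_hash]
print("Replicas that disagree with the majority:", outliers)  # ['node-c']
```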

Nevertheless, the implications of Web 3.0 raise some questions about the inherent lack of oversight and control in terms of security and legality; some believe the Internet could become something akin to the “Wild West”. Likewise, despite the excitement of a decentralized internet, many wonder if it will be as free as people think. Big tech is unlikely to allow their monopoly to end so easily – already several large corporations and venture capitalists have started pumping money into Web 3.0.

A major misconception is that the Metaverse and Web 3.0 are the same. In reality, the metaverse is just users interacting with the presentation/interactive layer, whereas Web 3.0 is the whole architecture with each level decentralized.

How Web 3.0 Will Affect Business Models

Web 3.0 will revolutionize the way businesses interact with their customers, so much so that they may need to build an entirely new infrastructure to adapt. New channels will need to be created for potential revenue streams, especially curated content, products, and experiences. Web 3.0 will give businesses direct access to end users, forcing many to rethink what omnichannel really means.


Currently, businesses face significant challenges in customer segmentation, but with Web 3.0, customers can effortlessly and purposefully share their information with the brands of their choice, making the customer experience even more personalized. By adding AI and machine learning to decentralized data structures, brands can do more than just targeted advertising; in healthcare, in particular, a company could design a drug based solely on a single patient’s data.

In many cases, users will collectively contribute to the creation of the product, being fairly compensated for their contribution as co-investors and creators with no central authority authorizing payments. Based on various DAO models and smart contracts, the direct link between end use/revenue and creation contribution can be established without any management party involved.

A realistic point of view

It will take time to understand the full potential of Web 3.0, and there is still a lot of research and development needed before it becomes a reality. And many obstacles stand in the way, such as the reluctance of big tech to give up its dominance. Nevertheless, Web 3.0 will completely change the way businesses and consumers interact online.

Five Key Analytics Dashboard Best Practices to Consider
https://webyantram.com/five-key-analytics-dashboard-best-practices-to-consider/ (Fri, 16 Sep 2022 21:08:30 +0000)

This is part of Solutions Review’s Premium Content Series, a collection of reviews written by industry experts in maturing software categories. In this submission, 2nd Watch Managing Consultant Rachel Stewart offers key analytics dashboard best practices to consider.

So you’ve been tasked with creating an analytics dashboard. It’s tempting to jump straight into development, but wait a minute! There are many easy-to-fall-into pitfalls that can ruin your plans for an attractive and useful dashboard. Here are five important dashboard development principles to keep in mind whenever you open Power BI, Tableau, Looker, or any other BI tool.

  1. Stay focused and defined

Before you start answering questions, you need to know exactly what you are trying to find out. The starting point for most dashboard projects should be a whiteboard session with end users; the dashboard becomes a collection of visuals capable of answering their questions.

For each visual you create, be sure to answer a specific question. Every graph should be intentional and useful, and it’s very important to have your KPIs clearly defined long before you start building. If you don’t include your stakeholders from the start, you’ll likely have a lot more to rework after the initial production is complete.

  2. A good database is essential

Generating meaningful visualizations is almost impossible without a good database. Impure data means holes and glitches will need to be patched again and again further down the pipeline. Many BI tools have functions that can format and prepare your data and generate some level of relational modeling to create your visualizations. However, too much modeling and logic in the tool itself will cause significant performance issues, and most BI tools are not specifically designed for data management. A well-modeled semantic layer in a separate tool that handles all the necessary business logic is often essential for performance and governance.

Don’t underestimate the semantic layer!

The semantic layer is the preparation stage where business logic is executed, joins are defined, and data is formatted from its raw form so that it is understandable and logical for users in the future. For Power BI users, for example, you’ll likely generate tabular models in SSAS. With a solid semantic layer in place before even accessing the BI tool, there will be little to no data management to do within the tool itself. This means there is less processing for the BI tool to handle and a much cleaner governance system.
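
As a rough illustration of the kind of business logic a semantic layer centralizes before data ever reaches the BI tool, here is a minimal pandas sketch; the tables, columns, and the "net revenue" rule are hypothetical, and a real implementation would live in SSAS, dbt, or a similar modeling layer rather than in a Python script.

```python
# A minimal sketch of semantic-layer-style preparation: joins, business
# definitions, and consistent naming applied once, upstream of every dashboard.
# Table and column names here are hypothetical.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "customer_id": [10, 10, 20],
    "amount": [120.0, 80.0, 50.0],
    "status": ["shipped", "returned", "shipped"],
})
customers = pd.DataFrame({
    "customer_id": [10, 20],
    "region": ["EMEA", "AMER"],
})

# Business rule defined once: "net revenue" excludes returned orders.
net_orders = orders[orders["status"] != "returned"]

semantic_view = (
    net_orders.merge(customers, on="customer_id", how="left")
              .groupby("region", as_index=False)["amount"].sum()
              .rename(columns={"amount": "net_revenue"})
)
print(semantic_view)
```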

In many BI tools, you can load a raw data set and have a working dashboard in 10 minutes. However, building a semantic layer requires you to slow down and spend time defining, developing, and thinking about what data and insights you are actually trying to get for your business. This ensures that you are actually answering the right questions.

This is one of the many strengths of Looker, which is specifically designed to handle the semantic layer and create visualizations. This forces you to define the logic in the tool itself before you start creating visuals.

It’s often tempting to skip data preparation steps in favor of getting a finished product out quickly, but remember: your dashboard is only as good as the data it contains.

  3. PLEASE declutter

Cluttered dashboards have a lot of obvious problems, but there’s a lesson to be learned that many developers forget: embrace white space! White space is your friend. As in web development, trying to cram too many visuals into the same dashboard is a recipe for disaster. Edward Tufte calls this principle the “data-ink ratio” in his book The Visual Display of Quantitative Information, one of the earliest and most important resources on data visualization.

Basically, just delete anything non-essential, or move information that is important but not immediately relevant to another dashboard or report page.

  4. Think before using this overly complicated visual

Are you about to use a tree diagram to demonstrate the relationships between three variables at once? What about a 3D representation of sales across three axes? Most of the time: no. Visualizing data isn’t about creating something flashy; it’s about creating something simple that someone can understand at a glance. For almost all complex visualizations, there is a simpler solution, like splitting the chart into several more focused charts.
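
As one hedged illustration of that advice, the sketch below replaces a single three-variable chart with small multiples, one focused panel per region; the data and labels are entirely synthetic.

```python
# Instead of one overloaded three-variable chart, a small-multiples layout keeps
# each panel answerable at a glance. Synthetic data, illustrative only.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
sales_by_region = {
    "North": [12, 14, 13, 17, 19, 22],
    "South": [9, 8, 11, 10, 12, 13],
    "West":  [15, 15, 14, 18, 21, 20],
}

fig, axes = plt.subplots(1, len(sales_by_region), figsize=(12, 3), sharey=True)
for ax, (region, values) in zip(axes, sales_by_region.items()):
    ax.plot(months, values)
    ax.set_title(region)        # one focused question per panel
    ax.set_xlabel("Month")
axes[0].set_ylabel("Sales (k units)")
fig.suptitle("Monthly sales, one panel per region")
fig.tight_layout()
plt.show()
```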

  5. Keep your interface clean, understandable and consistent

Along with keeping your data clean and your logic well-defined, it’s important to make sure everything is understandable from start to finish and easy for end users to interpret. It starts with simply defining dimensions and measures in a logical and consistent way, as well as hiding excess and unused columns in the final product. A selection panel with 10 well-named column options is much easier to work with than one with 30, especially if end users will be doing the editing and exploring themselves.

You might notice a theme with most of these dashboard development principles: Slow down and plan. It’s tempting to jump straight into creating visuals, but never underestimate the value of planning and defining your steps first. This will help ensure that your dashboard is clean, consistent, and most importantly, valuable.

Rachel Stewart

How Integrated City Data Delivers Social Services
https://webyantram.com/how-integrated-city-data-delivers-social-services/ (Wed, 14 Sep 2022 21:18:57 +0000)

Philadelphia’s new Office of Integrated Data for Evidence and Action (IDEA) integrates social services data into city departments so officials can identify vulnerable populations and shape policies and programs to reduce poverty and enhance equity.

The city had been working with data for two decades, but IDEA was created in March by executive order. “It started with case management and then morphed into using big chunks of data to identify vulnerable populations so we could coordinate services for those populations,” Director James Moore said.

Moreover, this is the first time that a data analysis and conversion team has been on board, he said.

Kristen Coe, Director of Research, Analytics and Evaluation, is responsible for recruiting staff to build analytics capacity and coordinate talent from other departments.

“If someone in [the Department of Behavioral Health and Intellectual Disability Services], which is our mental health department, wanted to make a request but had a question about child protection involvement, we would go to our colleagues in the child protection department and talk with them about the right way to query the data they contribute,” Coe said. “It’s about sitting down and being able to see all the different data available and thinking about it.”

In August, the IDEA office released the Upward Mobility Plan, which aims to lift the city’s poorest residents out of poverty by identifying them and assessing their needs through data integration. The plan highlights four components for achieving equity: data management, performance measurement, community engagement, and coordinated benefit access strategies.

“Strong and effective data collection and management, and the use of person-level data, can enable us to address the challenges our residents face in accessing programs and services,” the plan says. “Expanding and improving the City’s data management and sharing operations will help us design better and more targeted programming and monitor implementation more closely.”

The plan outlines data management initiatives, such as ensuring that the data collected is appropriately disaggregated and that all contracted service delivery partners adhere to data collection, sharing and acceptable-use agreements. It also calls for the creation of a data fairness framework and a multi-departmental public data dashboard, the construction of internal data management systems, contribution to a city-wide data dictionary, and an inventory of the data collected by departments.

IDEA in action

An example of the IDEA office in action is how it used data on Philadelphia residents to enroll more families in the child tax credit program after President Joe Biden signed the American Rescue Plan Act last year.

In an April article, Philadelphia Mayor Jim Kenney wrote that with ARPA’s expanded tax benefits, 75,000 Philadelphians, including 46,000 children, could be lifted out of poverty. But not all families who needed benefits received them. The IRS estimated that up to 14,000 children in Philadelphia had very low-income parents or guardians and were not receiving child tax payments, Kenney said.

“There’s a layer of people who are the poorest, the lowest income, who don’t file tax returns, who weren’t going to get it automatically,” Moore said, so the IDEA office focused its efforts on identifying and reaching out to these families.

The bureau used local social services data that revealed which households were participating in the state’s Medicaid program, homelessness prevention services, or foster and family care. Families participating in these programs fit the description of those who receive the child care tax credit and would therefore likely qualify for the program, said Solomon Leach, communications manager for the Office of Community Empowerment and Opportunity (CEO).

To manage the data, which is stored in an Oracle database, the office uses a custom ETL process that copies some of the data from the source agency and sends it to the IDEA system. Then an algorithm matches the new record to one that already exists in IDEA’s system to ensure they belong to the same person, Leach said.

“It’s based on point values, so if dates of birth match, for example, a record is given a number of points for matching on that attribute,” he added.
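
Philadelphia's actual matching rules are not published in the article, but the point-based idea Leach describes can be sketched roughly as follows; the field names, weights, and threshold are illustrative assumptions, not the city's real configuration.

```python
# Simplified sketch of point-based record matching: each attribute match earns
# a weighted score, and a candidate above a threshold is treated as the same
# person. Weights, fields, and the threshold below are assumptions.
FIELD_WEIGHTS = {"date_of_birth": 40, "last_name": 25, "first_name": 15, "zip_code": 10}
MATCH_THRESHOLD = 60

def match_score(incoming: dict, existing: dict) -> int:
    """Sum the points for every attribute on which the two records agree."""
    score = 0
    for field, weight in FIELD_WEIGHTS.items():
        a, b = incoming.get(field), existing.get(field)
        if a and b and str(a).strip().lower() == str(b).strip().lower():
            score += weight
    return score

incoming = {"first_name": "Maria", "last_name": "Lopez",
            "date_of_birth": "1990-03-12", "zip_code": "19104"}
existing = {"first_name": "Maria", "last_name": "Lopez",
            "date_of_birth": "1990-03-12", "zip_code": "19121"}

score = match_score(incoming, existing)
print(score, "-> same person" if score >= MATCH_THRESHOLD else "-> no match")  # 80 -> same person
```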

The next step is to translate the data into information that city officials can evaluate to move forward with the best course of action, Moore said. IDEA provides data users, such as departments or research partners, with information in various formats, such as client-level datasets as well as tabular data for aggregation, or analyses that can then be incorporated into reports, summaries and presentations. When presenting geographic data, IDEA uses maps or dashboards created with Esri’s ArcPro and ArcOnline enterprise software.

Once names and contact details were available, CEO and PhillyCounts outreach teams reached out to families via mail, phone calls and text messages to notify them that they could claim Child Tax Credit payments and qualify for free tax preparation.

IDEA’s future goals include “working with public and private partners to explore new data-sharing opportunities that improve data completeness and accuracy,” Leach said, which will help the office achieve its goal of serving local residents as well as possible.

IDEA is negotiating with the state to access more data such as income and entitlement information, Moore said. The bureau is also interested in working with hospital systems to acquire emergency room data, but any discussions are still preliminary.

Imaging strategy and digital mastery in health informatics
https://webyantram.com/imaging-strategy-and-digital-mastery-in-health-informatics/ (Tue, 13 Sep 2022 04:28:54 +0000)
Chris Jenkins, Senior Vice President at Healthlink Advisors

How you manage imagery says a lot about your organization’s digital maturity

The ongoing digitalization of industry continues to disrupt business and service models across all sectors. Even though healthcare in the United States has traditionally lagged behind other fields in technological adaptation, there is now a significant and far-reaching change underway in the way healthcare operates, driven by IT infrastructure.

From the slow and often frustrating rise of the EHR to the lightning-fast deployment of telehealth services in a crisis, operational capability is increasingly influenced by digital literacy.

Exploring one aspect of this dynamic highlights the broader global trend: medical imaging strategy is a particularly relevant example of the new paradigm in action.

Modern imagery

You can trace the profound value and impact of medical imaging back to the late 1800s and the advent of X-rays. Computers multiplied this utility exponentially. At the dawn of the digital age, healthcare professionals were actively innovating and embracing digital technologies to better harness the power of imaging. In 1982, even before the launch of the World Wide Web, the first Electronic Picture Archiving and Communication Systems (PACS) were in the works!

Since then, many systems “have been developed with the goal of providing cost-effective storage, rapid image retrieval, access to acquired images with multiple modalities, and simultaneous access across multiple sites.” In short, modern imaging systems are an integral part of modern healthcare.

Over the years, in order to meet the needs of emerging or expanding diagnostic departments for specialized workflows (e.g., radiology, cardiology, mammography, pathology, dermatology, etc.), many healthcare IT implementations simply added new imaging technologies on a case-by-case basis. These ad hoc practices have resulted in balkanized imaging data silos, unnecessary complexity, and disjointed imaging ecosystems.

Inconsistent technologies and workflows wreak havoc on IT stability and organizational productivity. Beyond the cost of maintaining fragmented imaging systems, clinicians must manage an increasing amount of medical imaging data from a range of devices in diverse care settings, using a myriad of different applications and processes. This “Frankenimaging” problem affects integration capability, user satisfaction, and ultimately the patient experience, but it can be fixed.

Enterprise imaging in healthcare

Adopting an enterprise imaging strategy can mitigate these issues and improve system productivity, as well as the clinician and patient experience. Just as the EHR has evolved as the connected digital platform for patient records and the flow of critical data in a healthcare system, imaging data management must also mature to meet changing standards and modalities. Enterprise imaging strategy must be geared towards future-proofing the delivery of medical imaging securely through the right channel, with the right context, at the right time, without friction.

An enterprise imaging strategy should ideally optimize data management security and consistency; centralize and standardize image capture, collection, formatting, storage, exchange and analysis capabilities across the organization; and integrate with electronic health records. Depending on the size, scope, and service area of the healthcare organization, enterprise imaging strategy objectives may include:

– Managing high-volume, varied internal imaging requirements as well as high-demand external referrals and emerging services

– Decommissioning unsustainable departmental imaging data silos and inextensible legacy imaging systems

– Evaluating new architectures and technology enhancements specific to specialty imaging (e.g., breast, cardiac, vascular, etc.)

– Analyzing modernization approaches (e.g., cloud-native versus hybrid infrastructure), integration requirements, and technology costs

– Leveraging cloud technologies, enhanced clinical analytics and AI on imaging data to give healthcare teams a more holistic view of patients

– Predicting the impact of value-based care and the convergence of clinical imaging with virtual and home care

– Positioning the organization to deliver an enhanced clinical team experience and the ability to respond to changing consumer demands

Digital mastery

Today, a desirable enterprise imaging strategy can serve as a solid roadmap to digital mastery for a healthcare organization. Formulating an effective clinical governance structure for the process is essential to ensure that stakeholder needs are met, capacities remain intact and expectations are managed.

When executed wisely, an enterprise imaging strategy can deliver a simplified architecture for the digital imaging environment and a better experience for clinicians, as well as improved security and even cost savings.

These are excellent characteristics of digital maturity.


About Chris Jenkins
Chris Jenkins is a senior vice president at Healthlink Advisors, a healthcare consulting firm committed to improving clinical innovation, business systems, and healthcare IT strategy, delivery, and operations.

The presidential secretariat clarifies that no leak of presidential data has occurred
https://webyantram.com/the-presidential-secretariat-clarifies-that-no-leak-of-presidential-data-has-occurred/ (Sun, 11 Sep 2022 05:44:37 +0000)

TEMPO.CO, Jakarta – Head of the Presidential Secretariat, Heru Budi Hartono, has made it clear that no presidential documents or letters have been leaked on the internet, contrary to claims made by some members of the public.

“The Secretary of State will clarify this soon. There have been no leaks of presidential letters,” Hartono said here on Saturday, in response to reports circulating on social media that President Joko Widodo’s (Jokowi) letters and documents were allegedly leaked by a hacker.

Information alleging that presidential letters and top-secret letters from the National Intelligence Agency (BIN) were leaked is a hoax, he noted, adding that spreading the hoax is a violation of the law on electronic information and transactions.

“I have to say this is a violation of the Electronic Information and Transactions Act. I believe law enforcement will take legal action and look for the suspect,” he said.

Earlier, a hacker known by the username Bjorka claimed to have hacked presidential data and obtained presidential letters and top-secret documents from the intelligence agency.

Bjorka’s claim was later shared by a Twitter account whose tweet went viral and became a trending topic on social media until Saturday morning.

The tweet, conveying Bjorka’s claim, said presidential letters and top secret BIN documents had been leaked.

Last August, the same hacker claimed to have obtained the data of 1,304,401,300 registered SIM card users, including their population ID number, phone number, cell operator name and registration date. Bjorka also claimed to have shared two million data samples for free.

In response to recurring reports of hacking and leaking of personal data, the Director General of Applications and IT at the Ministry of Communication and Information, Semuel Abrijani Pangerapan, pointed out that the ministry and its stakeholders have committed to remedying the alleged data breach.

He also reminded cellular carriers and electronic system operators to do quick checks for any indication of data leaks.

“Each provider should have the ability to mitigate and (implement) security (measures), maintain confidentiality, mitigate risk in the event of (data) leakage, (know) what doesn’t have to be assembled – that’s what organizers always have to do,” he said.

ANTARA


Updates to Optimizely, HubSpot, Metadata, and more
https://webyantram.com/updates-to-optimizely-hubspot-metadata-and-more/ (Fri, 09 Sep 2022 13:44:57 +0000)

The editors of Solutions Review have compiled a list of the best MarTech news from the week of September 9, 2022. This roundup features news and updates from top CRM and marketing technology brands like Optimizely, HubSpot, Metadata, and more.

Keeping an eye on the most relevant CRM and MarTech news can take time. Accordingly, our editorial team aims to summarize the top headlines of the week in the marketing technology landscape. Solutions Review editors will compile a weekly roundup of vendor product news, mergers and acquisitions, venture capital funding, talent acquisition, and other notable MarTech news. With that in mind, here are some of the top MarTech news items from September 9th.

Our Free CRM Buyer’s Guide helps you evaluate the best solution for your use case and profiles the leading vendors in the market.

Top MarTech News for the week of September 9


Heap details its new one-click session replay capabilities

Heap, a digital information solutions provider, has launched fully integrated one-click session replay capabilities to help businesses understand pain points and streamline customer experiences. These new features leverage product analytics and session replay tools to capture every click, swipe, and submission in a customer’s digital journey. Businesses can also use these tools to identify key sales funnel events, define events with visual tagging tools, watch instant replays to ensure their teams are tracking the right data, improve customer support, and more.


HubSpot unveils new features for its CRM platform at INBOUND 2022

HubSpot announced several new features and updates for its leading customer relationship management (CRM) platform for scaling companies at its annual INBOUND event. New features include payment schedule tools, data management enhancements, WhatsApp integration, customer journey analytics, a fully connected service desk and other tools to help businesses create meaningful connections with customers. HubSpot also shared details about connect.com, a new community for growth professionals to help develop stronger relationships with peers in their community.


Nektar Announces General Availability of Its AI-Based Data Capture Solution

Nektar.ai, a revenue operations (RevOps) platform provider, has launched its AI-powered activity capture and intelligence solution into general availability. The company’s no-code platform and new AI-powered solution will help users improve productivity, improve pipeline visibility, forecast revenue, and add contextual data on revenue activity via email, chat, calendar, social media, and other customer journey touchpoints in a CRM platform. Nektar is backed by B Capital Group, Nexus Venture Partners, and 3One4 Capital.


Metadata.io Acquires Reactful to Help B2B Marketers Improve Website Engagement

Metadata.io, a marketing operating system for B2B companies, announced the acquisition of Reactful, a real-time website personalization engine. Reactful’s website personalization and account-based marketing (ABM) capabilities will be integrated into Metadata’s marketing operating system solution. These new features will help Metadata’s B2B marketing clients increase visitor engagement, maximize web traffic conversion, drive revenue, personalize experiences, reduce time spent on repetitive tasks, and improve results with targeted ads that increase ROI.


Optimizely launches its experimentation platform on Google Cloud Marketplace

Optimizely, a digital experience platform (DXP) provider, has released its Web Experimentation and Full Stack solution on Google Cloud Marketplace. With these tools, companies can use “science-based data” to take the guesswork out of developing personalized engagements and improving customer experience (CX). The launch of these products on Google Cloud Marketplace is part of the multi-year strategic partnership Optimizely announced with Google Cloud earlier in 2022. The two brands plan to continue coordinating a go-to-market and sales execution strategy to provide customers with scalable experimentation solutions.


Rossum Launches New Email Automation Features for Its Intelligent Document Processing (IDP) Solution

Rossum, a cloud-native Intelligent Document Processing (IDP) solution provider, has unveiled new email automation features for its platform to help users manage document communication tasks. With these features, Rossum users can automate notifications, acknowledge receipt, route payment/invoice information, provide status updates, and more. This will streamline data entry and reduce approval time, making it easier for companies to keep their suppliers informed, maintain internal team alignment, and manage responses with a unified brand voice.


To be considered for future news digests, send your announcements to wjepma@solutionsreview.com.


William Jepma

Libraries offer workshops on data management, research reproducibility in R
https://webyantram.com/libraries-offer-workshops-on-data-management-research-reproducibility-in-r/ (Tue, 06 Sep 2022 21:16:27 +0000)

UNIVERSITY PARK, Pennsylvania – Beginning September 19, Penn State University Libraries’ Department of Research Computing and Publishing will offer a series of four online workshops on data management and research reproducibility in R. The purpose of these sessions is to introduce resources and expertise available in the University Libraries.

R is a statistical programming language that allows users to manage data sets, manage analysis workflows, conduct statistical analysis, and create data visualizations. This series of workshops will provide hands-on training in fundamental coding skills, data visualization, and data management strategies in R to support research reproducibility. Attendees can expect to learn how to package data into an analysis-ready format, use R packages and connections to manage R projects, and create data visualizations using an R package called ggplot2.

Workshops are free and open to Penn State graduate students, postdoctoral researchers, faculty, and staff. No prior knowledge of R is required.

Since the workshops build on each other, participants must complete them in order. The first workshop, “The Basics of R and RStudio”, is optional but recommended for those who have never used R or RStudio before.

Participants must have access to a computer with a Mac, Linux or Windows operating system and be able to download the R, RStudio and Git applications. Registrants will receive instructions on how to access these applications prior to the start of the workshops.

All workshops will take place via Zoom. Access information will be distributed via email prior to the start of the series.

Registration is required before September 12. Click here to subscribe to the series.

For more information, contact Research Informatics and Publishing at repub@psu.edu.

Workshop schedule

Basics of R and RStudio

Sept. 19, 2 p.m. to 4 p.m.

Data processing in R

Sept. 22, 2 p.m. to 4 p.m.

Data management and research reproducibility in RStudio

Sept. 29, 2 p.m. to 4 p.m.

Data Visualization in R

Oct. 6, 2 p.m. to 4 p.m.

5 Ways Data Scientists Can Advance Their Careers
https://webyantram.com/5-ways-data-scientists-can-advance-their-careers/ (Mon, 05 Sep 2022 05:44:02 +0000)

Data and machine learning scientists are joining enterprises with the promise of cutting-edge ML models and technologies. But often they spend 80% of their time cleaning data or dealing with data riddled with missing values and outliers, a frequently changing schema, and massive load times. The gap between expectations and reality can be huge.

While data scientists may initially be excited to tackle advanced insights and models, that enthusiasm quickly deflates amid daily schema changes, tables that stop updating, and other surprises that silently break models and dashboards.

While “data science” applies to a range of roles, from analyzing products to putting statistical models into production, one thing is generally true: data scientists and ML engineers often find themselves at the end of the data pipeline. They are consumers of data, pulling it from data warehouses or S3 or other centralized sources. They analyze data to help make business decisions or use it as training inputs for machine learning models.

In other words, they are impacted by data quality issues but are often not empowered to move up the pipeline earlier to fix them. So they write a ton of defensive data preprocessing into their work or move on to a new project.

If this scenario sounds familiar, you don’t have to give up or resign yourself to the idea that upstream data engineering is broken for good. Do like a scientist and be experimental. You are the last step in the pipeline before models and analyses are released, which means you are responsible for the outcome. While that may seem terrifying or unfair, it’s also a great opportunity to shine and make a big difference in your team’s business impact.

Here are five things data scientists and ML analysts can do to get out of defense mode and ensure that, even if they didn’t create the data quality issues, they prevent those issues from impacting data-dependent teams.

1. Increase confidence through better monitoring of data quality

Business leaders are hesitant to make decisions based solely on data. A KPMG report showed that 60% of companies do not feel very confident in their data and that 49% of management teams do not fully support internal data and analytics strategy.

Good data scientists and ML engineers can help by increasing the accuracy of data and then integrating it into dashboards that help key decision makers. By doing so, they will have a direct positive impact. But manually checking data for quality issues is error-prone and a huge drag on your speed. It slows you down and makes you less productive.

Using data quality tests (e.g. with dbt testing) and data observability helps you ensure you know about quality issues before your stakeholders, earning their trust in you (and the data) over time.
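
As a rough sketch of what such checks can look like, the snippet below runs a few pandas-based tests for nulls, duplicates, and freshness; the column names, thresholds, and sample data are assumptions (and the 2022 timestamps are there deliberately so the freshness check fires), not a substitute for dbt tests or a full observability tool.

```python
# Lightweight pandas checks in the spirit of the dbt-style tests mentioned
# above: nulls, duplicates, and freshness. Column names are hypothetical.
import pandas as pd

def run_quality_checks(df: pd.DataFrame, id_col: str, ts_col: str, max_lag_hours: int = 24):
    failures = []
    if df[id_col].isna().any():
        failures.append(f"{id_col} contains nulls")
    if df[id_col].duplicated().any():
        failures.append(f"{id_col} contains duplicates")
    lag = pd.Timestamp.now(tz="UTC") - df[ts_col].max()
    if lag > pd.Timedelta(hours=max_lag_hours):
        failures.append(f"data is stale by {lag}")
    return failures

events = pd.DataFrame({
    "event_id": [1, 2, 3],
    "loaded_at": pd.to_datetime(["2022-09-01", "2022-09-02", "2022-09-03"], utc=True),
})
# With this old sample data, the freshness check intentionally fails.
print(run_quality_checks(events, "event_id", "loaded_at") or "all checks passed")
```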

2. Establish SLAs to avoid confusion and blame

Data quality issues can easily lead to a boring blame game between data science, data engineering, and software engineering. Who broke the data? And who knew? And who will fix it?

But when bad data comes into the world, it’s everyone’s fault. Your stakeholders want the data to work so the business can move forward with an accurate picture.

Good data scientists and ML engineers reinforce accountability for all stages of the data pipeline with service level agreements. SLAs define data quality in quantifiable terms, assigning stakeholders who must intervene to resolve issues. SLAs help avoid the blame game altogether.
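
One hedged way to express such an SLA in quantifiable, machine-checkable terms is sketched below; the table, owner, metrics, and thresholds are illustrative assumptions rather than a recommended standard.

```python
# Illustrative data SLA expressed in quantifiable terms, as described above.
# The metrics, thresholds, and owner are assumptions.
from dataclasses import dataclass

@dataclass
class DataSLA:
    table: str
    owner: str                 # who intervenes when the SLA is breached
    max_null_rate: float       # fraction of null values allowed in key columns
    max_staleness_hours: int   # how old the newest row may be

    def evaluate(self, observed_null_rate: float, observed_staleness_hours: float):
        breaches = []
        if observed_null_rate > self.max_null_rate:
            breaches.append(f"{self.table}: null rate {observed_null_rate:.1%} > {self.max_null_rate:.1%}")
        if observed_staleness_hours > self.max_staleness_hours:
            breaches.append(f"{self.table}: {observed_staleness_hours:.0f}h stale > {self.max_staleness_hours}h")
        return breaches

sla = DataSLA(table="orders", owner="data-engineering", max_null_rate=0.01, max_staleness_hours=6)
print(sla.evaluate(observed_null_rate=0.03, observed_staleness_hours=2) or "SLA met")
```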

3. Faster analysis through experiments

Trust is so fragile and it quickly erodes when your stakeholders catch mistakes and start assigning blame. But what about when they fail to detect quality issues? Then the model is bad, or bad decisions are made. Either way, the business suffers.

For example, what if a single entity is registered as both “Dallas-Fort Worth” and “DFW” in a database? When testing a new feature, everyone in “Dallas Fort-Worth” is shown Variant A and everyone in “DFW” is shown Variant B. No one catches the discrepancy. You cannot draw conclusions about users in the Dallas Fort-Worth area: your test is invalid because the groups were not properly randomized.

Pave the way for better experimentation and analysis with a higher-quality database. By using your expertise to improve quality, your data will become more reliable and your sales teams will be able to run meaningful tests. The team can focus on what to test next instead of doubting the results of the tests.
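
A minimal sketch of that kind of fix, using the Dallas-Fort Worth example above: normalize entity names to a canonical form before experiment assignment so the same market can never straddle both variants. The alias map and the hashing-based assignment are assumptions for illustration, not a specific experimentation tool's API.

```python
# Normalize entity names *before* assignment so "Dallas-Fort Worth" and "DFW"
# always land in the same experiment bucket. Alias map and hashing are illustrative.
import hashlib

CANONICAL_MARKETS = {
    "dallas-fort worth": "DFW",
    "dallas fort-worth": "DFW",
    "dfw": "DFW",
}

def canonical_market(raw: str) -> str:
    return CANONICAL_MARKETS.get(raw.strip().lower(), raw.strip().upper())

def assign_variant(user_id: str, market: str) -> str:
    """Deterministic assignment keyed on the canonical market plus user id."""
    key = f"{canonical_market(market)}:{user_id}".encode()
    return "A" if int(hashlib.md5(key).hexdigest(), 16) % 2 == 0 else "B"

# Both spellings now produce the same bucket for the same user.
print(assign_variant("user-42", "Dallas-Fort Worth"), assign_variant("user-42", "DFW"))
```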

4. Become the point of contact for data quality

Trust in data starts with you; if you don’t master reliable, high-quality data, you will carry that burden into your interactions with the product and your colleagues.

So claim your position as the point of contact for data quality and ownership. You can help define quality and delegate responsibility for solving different problems. Remove friction between data science and engineering.

If you can lead the charge to define and improve data quality, you’ll impact nearly every other team in your organization. Your teammates will appreciate the work you do to reduce organization-wide headaches.

5. Minimize data waste

Incomplete or unreliable data can result in terabytes of wasted data. This data resides in your warehouse and is included in queries that incur compute costs. Poor quality data can be a major drag on your infrastructure bill as it is repeatedly included in the filtering process.

Identifying complex data is a way to immediately create value for your organization, especially for pipelines that see heavy traffic for product analytics and machine learning. Recollect, reprocess, or impute and clean up existing values to reduce storage and computational costs.

Keep track of the tables and data you cleanse, as well as the number of queries run against those tables. It’s essential to let your team know how many queries no longer run against junk data and how many GB of storage have been freed up for better things.
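
A tiny, purely illustrative tally of that kind of reporting might look like the following; every table name and figure is hypothetical.

```python
# Illustrative cleanup-impact tally: which tables were cleaned, storage
# reclaimed, and how many queries previously hit junk data. Numbers are made up.
cleaned_tables = [
    {"table": "events_raw_2019", "gb_freed": 420, "queries_last_30d": 12},
    {"table": "tmp_user_dupes",  "gb_freed": 35,  "queries_last_30d": 57},
]

total_gb = sum(t["gb_freed"] for t in cleaned_tables)
total_queries = sum(t["queries_last_30d"] for t in cleaned_tables)
print(f"Reclaimed {total_gb} GB; {total_queries} queries/month no longer touch junk data")
```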

All data professionals, seasoned veterans and newcomers alike, should be indispensable parts of the organization. You add value by taking ownership of more reliable data. Although the tools, algorithms, and techniques for analysis are becoming more sophisticated, the input data often isn’t: it is still unique and company-specific. Even the most sophisticated tools and models don’t work well with bad data. Through the five steps above, the impact of data science can be a boon to your entire organization. Everyone wins when you improve the data your teams depend on.

What techniques can help data scientists and ML engineers streamline the data management process? Tell us on Facebook, Twitter, and LinkedIn. We would like to know!


Nord Pool Cuts Cost-Rising Historical Price Data Access Fees | News
https://webyantram.com/nord-pool-cuts-cost-rising-historical-price-data-access-fees-new/ (Thu, 01 Sep 2022 13:12:00 +0000)

“The historical data on hourly prices has been moved to our data portal (paid – ed.), mainly to (enable us to) cover the costs of maintaining the database, which continues to grow over time, as well as to maintain the quality of the data,” Ingrid Arus, Nord Pool’s market manager for the Baltic region, told ERR.

Arus explained that until now, the cost of data management had been covered by Nord Pool member fees.

“In a situation where there has been a surge in interest in electricity market data from outside our membership, it was no longer appropriate to provide free access to this data for all on an equal footing,” said Arus.

According to Arus, the Nord Pool webpage will continue to show hourly prices for the daily market dating back to early 2021. Older price data, however, has been moved and can now only be viewed through the Nord Pool data portal, where historical data going back to 2012 is available. Access to historical price data became a paid service from June 7, costing €600 per year for up to three users.

“It is important to point out that historical hourly price data is also published on the ENTSO-E transparency platform, where you can see data for the whole of the European Union, and, generally, on the websites of the (national) system operators, (which) in the Estonian context (is) Elering,” said Arus.

Arus was also keen to add that the range of data that can be viewed and downloaded for a fee “is extremely wide”.

“In particular, there has been a lot of interest in this service from portals and news agencies, which use our data to offer services to their customers, as well as from private companies providing electricity market analysis or other electricity market-related services to their customers,” Arus said. “In general, private customers do not use our data services, as price information is publicly available elsewhere.”

According to a Nord Pool spokesperson, the company has made exceptions to the fee for public authorities, who need the data to fulfill their responsibilities. Thus, the statistical office (Statistics Estonia) has free access to the entire database, while Nord Pool members can also consult the data without paying any additional costs.

Sikkut: ‘Paywall’ not in interest of electricity consumers

Economic Affairs and Infrastructure Minister Riina Sikkut has condemned Nord Pool’s decision to restrict the availability of free access to its historical data.

“In the current situation, I see a need to increase the transparency of Nord Pool so that the interests of our consumers are better protected. It is difficult to see how the decision to put historical market data behind a so-called ‘paywall’ would further those interests in any way,” Sikkut told ERR through a spokesperson.

Sikkut said that in her view, the pricing system on the Nord Pool power exchange should also be reviewed, in addition to the level of transparency around market transactions. “The origin and volume of bids could be made public, for example,” she said.

Sikkut also announced that she would make a formal proposal to the European Commission and Nord Pool on the need to increase market transparency and improve control procedures.

Estonia has no stake in Nord Pool

According to Sikkut, Estonia has no participation in the Nord Pool electricity market. However, in 2012, when Estonia opened its electricity market and integrated its electricity system with the Nordic countries, Elering acquired a 2% share in Nord Pool.

At the time, this allowed Elering to get information about the functioning of the Nordic Common Market straight from the source. TSOs from the Baltic countries held a seat on the board of directors of Nord Pool, a post which they rotated among themselves.

The electricity market processes were later brought under EU law and the market was opened up to competition between different power exchanges, with the network operators therefore playing no role in the organization of power exchanges.

Neither power exchanges nor their owners have the ability to control day-ahead or intraday markets, as this would require changes at EU level.

According to the spokesperson for the Ministry of Economic Affairs, Elering sold its share in Nord Pool in 2021, to ensure neutrality. Since Elering is one of many power exchange owners operating in Estonia, remaining on the board of Nord Pool would be seen as an unfair advantage in the market.

