Search Results

Search found 26592 results on 1064 pages for 'information presentation'.


  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to assist the EM Administrator in monitoring the health of the EM infrastructure. Taking feedback delivered by customers directly and through customer advisory boards, the team has made some nice enhancements to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we’ll highlight some of the new information that’s on display in these redesigned pages and explain how the information they present can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure.

    The first page we’ll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you’ll see the new layout, which includes 3 tabs containing more drill-down information.

    The Repository Tab

    The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information the EM Administrator needs to review from time to time to ensure that the infrastructure is in good health. Rather than go through every panel, let’s call out a few and let you explore the others later on your own EM site. Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important elements of information relating to availability and reliability:
    • Is the database in Archive Log mode?
    • Is the database using Flashback?
    • When was the last database backup taken?
    In the test environment above the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates there have been no recorded backups taken.

    The next region of interest on this page shows key information around the Repository configuration, specifically the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle’s minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values, if the parameter allows dynamic changes).

    The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality that customers have requested has been added. First up: Restarting Repository Jobs.
    As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation; you can now take care of it directly from within the UI.

    Next, you’ll see that a feature has been added to allow the EM Administrator to customise the run time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don’t clash with Production backups, etc., is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid such clashes. Moving on to the next tab, let’s select the Metrics tab.

    The Metrics Tab

    There are some big changes here: this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact that adding new targets, large numbers of new hosts, or new target types has on the Repository. This page helps the EM Administrator get to grips with this. Let’s take a quick look at two regions on this page.

    First up there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates relative volume. You can see from the example above that a quick glance shows Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely following behind are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7MB of data, while the total information collected for WebLogic Server and Application Deployments is 0.38MB and 0.37MB respectively. To get this breakdown of the volume of data collected, simply hover over a bubble in the chart and a floating tooltip shows the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost – in terms of number of rows, number of collections and storage cost of data – for each metric type.

    Looking at another panel on this page we can see a different view of this data. This panel shows the Top N metrics (the drop-down allows you to select 10, 15 or 20), sorted by volume of data. In the case above, the largest metric collection by volume over the last 30 days is the information about OS Registered Software on a Host target.

    Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation.
    Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we’ll look at on this page is the Schema tab.

    The Schema Tab

    Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for its users – not least because well-managed storage ensures the continued availability of EM for monitoring purposes.

    The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend showing that storage in the EM Repository has been steadily consumed over the last few days. This is normal, as the EM installation used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you’re likely to see a trend of space allocation that plateaus. The table below the trend chart shows the Top 20 Tables/Indexes sorted in descending order of space consumed. You can switch the trend chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region.

    The last region to highlight on this page shows information about the Purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository. Of course, it’s also been a long-requested feature to have the ability to modify these default retention periods, and you can do this using this screen. As there are interdependencies between some data elements, you can’t modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups, and retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository’s storage footprint and its performance. For now, we’re just highlighting the feature’s visibility on these new pages.

    As a user of EM12c, we hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases. We’ll look out for any comments or feedback you have on these pages!
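    P.S. For those who like to double-check from SQL*Plus, the same health checks surfaced on these pages can be approximated directly against the repository database. This is only a hedged sketch: the queries use standard Oracle dictionary views, and MGMT_TABLESPACE is assumed to be your repository tablespace name (it is the usual default, but your site may differ).

        -- Archivelog / Flashback status and last recorded backup (Repository tab checks)
        SELECT log_mode, flashback_on FROM v$database;
        SELECT MAX(completion_time) AS last_backup FROM v$backup_set;

        -- A rough equivalent of the Schema tab's "Top 20 Tables/Indexes" view
        SELECT * FROM (
          SELECT owner, segment_name, segment_type,
                 ROUND(bytes / 1024 / 1024) AS size_mb
          FROM   dba_segments
          WHERE  tablespace_name = 'MGMT_TABLESPACE'  -- assumed default; adjust for your site
          ORDER  BY bytes DESC
        ) WHERE ROWNUM <= 20;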

    Read the article

  • Understanding the JSF Lifecycle and ADF Optimized Lifecycle

    - by Steven Davelaar
    While coaching ADF development teams over the years, I have noticed that many developers lack a basic understanding of Java Server Faces, in particular the JSF lifecycle and how ADF optimizes this lifecycle in specific situations. As a result, ADF developers who are tasked with building a seemingly simple ADF page can get extremely frustrated by the (in their eyes) unexpected or illogical behavior of ADF. They start to play with the immediate property and the partialTriggers property in a trial-and-error manner. Often, they play with these properties until their specific issue is solved, unaware of other, more severe bugs that might be introduced by the values they choose for these properties.

    So, I decided to submit a presentation for the UKOUG entitled "What you need to know about JSF to be successful with ADF". The abstract was accepted, and I started putting together the presentation and demo application. I built up a demo application step by step, trying to cover the top JSF-related issues and challenges I encountered over the years in a simple "Hello World" demo. This turned out to be both a very time-consuming and very interesting journey. I had never thought I would learn so much myself in preparing this presentation. I never thought I would end up with potentially controversial conclusions like "Never set immediate=true on an editable component". I did not realize beforehand the sometimes immense implications of the ADF optimized lifecycle. I never thought that "Hello World" demos could get so complex. But as I went on, I was confident this was valuable material, even for experienced ADF developers with a good understanding of JSF.

    When I finished, I realized the original title and abstract were misleading, as was the target audience. Yes, it was covering the JSF lifecycle, but not the other aspects of JSF you need to know for ADF development. Yes, it was covering some JSF basics as mentioned in the abstract, but all in all it had become a pretty advanced presentation. At the same time, the issues discussed are very common, and novice ADF developers might easily run into them while building their first pages. I ran out of time, so I decided to just present what I had, apologizing at the beginning for the misleading title and showing a second slide with a better title: "18 invaluable lessons about ADF-JSF interaction". I think the presentation was well received overall, although people who don't like it or don't understand it usually don't come and tell you afterwards....

    I am still struggling with the title; for this blog post I used yet another title. Anyway, you can download the presentation-that-still-lacks-a-good-title here. The finished JDev 11.1.1.6 demo app can be downloaded here. The 18 lessons mentioned in the presentation are summarized here. As mentioned on the last slide: print out the lessons and learn them by heart; I am pretty sure it will save you lots of time and frustration!
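    To give a concrete flavor of the kind of lesson involved, here is a minimal sketch of my own (not lifted from the slides) showing the one place where immediate="true" is generally safe – a Cancel button that must bypass validation – next to the trap the lessons warn about:

        <af:inputText label="Name" value="#{bindings.name.inputValue}" required="true"/>
        <af:commandButton text="Save"/>
        <!-- Safe: on a command component, immediate="true" fires the action during
             Apply Request Values and skips validation of the non-immediate inputs. -->
        <af:commandButton text="Cancel" immediate="true" action="back"/>
        <!-- Trap: on an editable component such as af:inputText, immediate="true"
             does NOT skip validation; it merely moves it to an earlier phase,
             which is one source of the "unexpected" behavior described above. -->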

    Read the article

  • Windows Phone 7 + Azure... and a couple of nuggets

    I recently gave a talk about Windows Phone 7 at a BizSpark Camp in San Francisco. The camp had two focuses: Azure and Windows Phone 7. My presentation covered the WP7 portion of the camp. During my presentation I highlighted the phone platform and talked about some of the differentiators from a design, technology and business standpoint.

    Whenever I watch presentations or go to tech meet-ups, I feel like I get the most value when I can walk away with a few nuggets that I wouldn't necessarily have known about otherwise. That said, I tried to add a few resources into my presentation that should be helpful when building WP7 apps.

    Nuggets

    Seeing that the camp was focused on Azure and WP7, I decided to augment my presentation with a code sample. The intention was to give some insight on how to approach building WP7 applications that talk to Azure. Some colleagues of mine here at Clarity have posted a sample on CodePlex focused on getting up and running with WP7 and Azure; you can check it out HERE. The project is not a hello-world app, and is targeted at people who have some experience with the platform and a working knowledge of Silverlight. Also, during my presentation I mentioned some limitations with the current phone SDK. Our sample code contains workarounds for the following:
    1. Panorama Control
    2. Tilt effect
    3. Animating Frame
    4. Sample architecture (leveraging MVVM Light) and coding patterns
    Note: for the sample phone project we used an Azure token that will expire in the next couple of months. When that happens, in the downloads section of the CodePlex project there is a link to a local development fabric that can be used for local development.

    Presentation

    Admittedly, the slide deck is pretty design heavy and doesn't contain much text. This was semi-intentional, to encourage people to come out to the camps and hear it first hand. There is some additional info found in the notes of the PPTX. Don't forget to check out the full presentation at the Chicago BizSpark Camp on May 21st here at the Clarity office, or on June 4th in Los Angeles. You can DOWNLOAD the slides here: PPTX | PDF.

    Cheers! Erik Klimczak | [email protected] | twitter.com/eklimcz

    Read the article


  • OBI already has a caching mechanism in presentation layer and BI server layer. How is the new in-memory caching better for performance?

    - by Varun
    Question: OBI already has a caching mechanism in presentation layer and BI server layer. How is the new in-memory caching better for performance? Answer: OBI Caching only speeds up what has been seen before. An In-memory data structure generated by the summary advisor is optimized to provide maximum value by accounting for the expected broad usage and drilldowns. It is possible to adapt the in-memory data to seasonality by running the summary advisor on specific workloads. Moreover, the in-memory data is created in an analytic database providing maximum performance for the large amount of memory available.

    Read the article

  • What would you use to keep track of all the random information about your LAN/Environment?

    - by agnul
    Say you work in your average anarchy... no appointed IT guy, no office manager whatsoever, just developers working in free-for-all mode and a couple of non-technical people around for admin (non-IT) stuff. What would you use to keep track of:
    • Hardware / software inventory
    • Backups of OS images, drivers, and all that LAN configuration information
    • Passwords for all the systems
    Currently I'm thinking of having some sort of wiki + web server on our only ''server-class'' machine, but then I fear I'd be the only one who would go through the hassle of adding information to the wiki... So, have you been through that? What would you suggest?

    Read the article

  • How to set up federated SQL data to display limited information to a Web server in the DMZ?

    - by Pcav
    I have a SQL server behind a firewall. I need to push some limited SQL 2005 information to a Web server in the DMZ so that I do not have to let database queries come all the way into the database server on our internal network. I want to push a small amount of dynamic data to a Web server in the DMZ and lock it down so that our hosted website does not need to come into the internal network for information. I want to put a server in the DMZ that will be the only connection allowed to the SQL database; this DMZ server will be the only server that can have any sort of connection to the back-end database, so the hosting provider just pulls the data from our server in the DMZ...

    Read the article

  • WebCenter Customer Spotlight: Indecopi

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Indecopi Optimizes Patent Approval Management and Accelerates Customer Service Times by 40%

    Indecopi is a decentralized public agency that promotes the country’s markets and protects consumer rights. It promotes fair and honest competition and safeguards all forms of intellectual property through three directorates: Author’s Rights, Inventions and New Technologies, and Trademarks. The business challenge was to unify the agency’s technology infrastructure to create a business process management strategy, consolidate the organization’s Web platform, improve and automate information services for citizens and businesses, and streamline patent procedures by digitizing documentation. Indecopi optimized patent information services, organized information, provided around-the-clock online access to users, and developed a Web site that gives internal and external users access to DIN information, such as patent documentation, through a user-friendly interface. Indecopi achieved impressive business results, reducing use of paper files by 50%, accelerating transaction approvals, reducing non-value-added activities by 85%, and accelerating customer service times by 40%.

    Company Overview
    Peru’s Instituto Nacional de Defensa de la Competencia y de la Protección de la Propiedad Intelectual (Indecopi), the National Institute for the Defense of Competition and Protection of Intellectual Property, is a decentralized public agency that promotes the country’s markets and protects consumer rights. It promotes fair and honest competition and safeguards all forms of intellectual property through three directorates: Author’s Rights, Inventions and New Technologies, and Trademarks.

    Business Challenges
    Indecopi's challenge was to unify the agency’s technology infrastructure to create a business process management strategy, starting with the Directorate of Inventions and New Technologies (DIN); consolidate the organization’s Web platform to meet new demands for software and process development, such as for patent applications; improve and automate information services for citizens and businesses; and streamline patent procedures by digitizing documentation.

    Solution Deployed
    Indecopi optimized patent information services with Oracle Business Process Management, automating processes to deliver expedient searches and to create new services, such as alerts to users. They organized information and provided around-the-clock online access to users with Oracle WebCenter Content. In addition, they used Oracle WebLogic Server to develop a Web site that provides internal and external users access to DIN information, such as patent documentation, through a user-friendly interface.

    Business Results
    Indecopi achieved impressive business results:
    • Reduced use of paper files by 50%
    • Accelerated transaction approvals and reduced non-value-added activities, such as manual document copying to obtain patents, by 85%
    • Accelerated customer service times by 40% by optimizing procedures, such as searches and online information related to granting patents

    “Oracle Business Process Manager has been a paradigm shift in process management. By digitalizing and automating our patents information services, we can now manage everything in the simplest way possible, expanding our options for the creation of new services.” Sergio Rodríguez, Assistant Director, Inventions and New Technologies Directorate, Instituto Nacional de Defensa de la Competencia y la Propiedad Intelectual

    Additional Information
    • Indecopi Customer Snapshot
    • Oracle WebCenter Content

    Read the article

  • Why is XML deserialization not throwing exceptions when it should?

    - by chobo2
    Hi, here is some dummy XML and a dummy XML schema I made.

    The schema:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://www.domain.com"
                   xmlns="http://www.domain.com"
                   elementFormDefault="qualified">
          <xs:element name="vehicles">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="owner" minOccurs="1" maxOccurs="1">
                  <xs:simpleType>
                    <xs:restriction base="xs:string">
                      <xs:minLength value="2" />
                      <xs:maxLength value="8" />
                    </xs:restriction>
                  </xs:simpleType>
                </xs:element>
                <xs:element name="Car" minOccurs="1" maxOccurs="1">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="Information" type="CarInfo" minOccurs="0" maxOccurs="unbounded" />
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
                <xs:element name="Truck">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="Information" type="CarInfo" minOccurs="0" maxOccurs="unbounded"/>
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
                <xs:element name="SUV">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="Information" type="CarInfo" minOccurs="0" maxOccurs="unbounded" />
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:complexType name="CarInfo">
            <xs:sequence>
              <xs:element name="CarName">
                <xs:simpleType>
                  <xs:restriction base="xs:string">
                    <xs:minLength value="1"/>
                    <xs:maxLength value="50"/>
                  </xs:restriction>
                </xs:simpleType>
              </xs:element>
              <xs:element name="CarPassword">
                <xs:simpleType>
                  <xs:restriction base="xs:string">
                    <xs:minLength value="6"/>
                    <xs:maxLength value="50"/>
                  </xs:restriction>
                </xs:simpleType>
              </xs:element>
              <xs:element name="CarEmail">
                <xs:simpleType>
                  <xs:restriction base="xs:string">
                    <xs:pattern value="\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"/>
                  </xs:restriction>
                </xs:simpleType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:schema>

    The XML sample:

        <?xml version="1.0" encoding="utf-8" ?>
        <vehicles>
          <owner>Car</owner>
          <Car>
            <Information>
              <CarName>Bob</CarName>
              <CarPassword>123456</CarPassword>
              <CarEmail>[email protected]</CarEmail>
            </Information>
            <Information>
              <CarName>Bob2</CarName>
              <CarPassword>123456</CarPassword>
              <CarEmail>[email protected]</CarEmail>
            </Information>
          </Car>
          <Truck>
            <Information>
              <CarName>Jim</CarName>
              <CarPassword>123456</CarPassword>
              <CarEmail>[email protected]</CarEmail>
            </Information>
            <Information>
              <CarName>Jim2</CarName>
              <CarPassword>123456</CarPassword>
              <CarEmail>[email protected]</CarEmail>
            </Information>
          </Truck>
          <SUV>
            <Information>
              <CarName>Jane</CarName>
              <CarPassword>123456</CarPassword>
              <CarEmail>[email protected]</CarEmail>
            </Information>
            <Information>
              <CarName>Jane</CarName>
              <CarPassword>123456</CarPassword>
              <CarEmail>[email protected]</CarEmail>
            </Information>
          </SUV>
        </vehicles>

    The serialization classes:

        using System;
        using System.Collections.Generic;
        using System.Xml;
        using System.Xml.Serialization;

        [XmlRoot("vehicles")]
        public class MyClass
        {
            public MyClass()
            {
                Cars = new List<Information>();
                Trucks = new List<Information>();
                SUVs = new List<Information>();
            }

            [XmlElement(ElementName = "owner")]
            public string Owner { get; set; }

            [XmlElement("Car")]
            public List<Information> Cars { get; set; }

            [XmlElement("Truck")]
            public List<Information> Trucks { get; set; }

            [XmlElement("SUV")]
            public List<Information> SUVs { get; set; }
        }

        public class CarInfo
        {
            public CarInfo()
            {
                Info = new List<Information>();
            }

            [XmlElement("Information")]
            public List<Information> Info { get; set; }
        }

        public class Information
        {
            [XmlElement(ElementName = "CarName")]
            public string CarName { get; set; }

            [XmlElement("CarPassword")]
            public string CarPassword { get; set; }

            [XmlElement("CarEmail")]
            public string CarEmail { get; set; }
        }

    Now I think this should all validate. If not, assume it is right, as my real file does work and this dummy one is based on it. Now my problem is this: I want to enforce as much as I can from my schema, such as that the "owner" tag must be the first element and should show up one time and only one time (minOccurs="1" maxOccurs="1"). Right now I can remove the owner element from my dummy XML file and deserialize it, and it will go on its happy way, convert it to objects, and just leave that property null. I don't want that; I want it to throw an exception or something saying this does not match what was expected. I don't want to have to validate things like that once deserialized. The same goes for the <Car></Car> tag: I want it to always appear even if there is no information, yet I can remove it too and it will be happy with that. So what tags do I have to add to make my serialization class know that these things are required, and if they are not found, throw an exception?
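    One direction that seems promising (an untested sketch on my part, not a confirmed canonical answer): XmlSerializer by itself never consults the schema, so enforce the XSD with a validating XmlReader and let validation errors throw during deserialization.

        using System.Xml;
        using System.Xml.Schema;
        using System.Xml.Serialization;

        public static class ValidatingLoader
        {
            public static MyClass Load(string xmlPath, string xsdPath)
            {
                var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
                settings.Schemas.Add("http://www.domain.com", xsdPath);
                // Turn every validation problem (e.g. a missing <owner>) into an exception.
                settings.ValidationEventHandler += (s, e) =>
                {
                    throw new XmlSchemaValidationException(e.Message, e.Exception);
                };
                using (var reader = XmlReader.Create(xmlPath, settings))
                {
                    return (MyClass)new XmlSerializer(typeof(MyClass)).Deserialize(reader);
                }
            }
        }
        // Note: for the schema to apply, the sample XML above must actually declare
        // xmlns="http://www.domain.com" on <vehicles> (the dummy file omits it), and
        // [XmlRoot] likely needs Namespace = "http://www.domain.com" to match.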

    Read the article

  • Does waitpid yield valid status information for a child process that has already exited?

    - by dtrebbien
    If I fork a child process, and the child process exits before the parent even calls waitpid, then is the exit status information that is set by waitpid still valid? If so, when does it become not valid; i.e., how do I ensure that I can call waitpid on the child pid and continue to get valid exit status information after an arbitrary amount of time, and how do I "clean up" (tell the OS that I am no longer interested in the exit status information for the finished child process)?

    I was playing around with the following code, and it appears that the exit status information is valid for at least a few seconds after the child finishes, but I do not know for how long or how to inform the OS that I won't be calling waitpid again:

        #include <assert.h>
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main()
        {
            pid_t pid = fork();
            if (pid < 0) {
                fprintf(stderr, "Failed to fork\n");
                return EXIT_FAILURE;
            } else if (pid == 0) {
                // code for child process
                _exit(17);
            } else {
                // code for parent
                sleep(3);
                int status;
                waitpid(pid, &status, 0);
                waitpid(pid, &status, 0); // call `waitpid` again just to see if the first call had an effect
                assert(WIFEXITED(status));
                assert(WEXITSTATUS(status) == 17);
            }
            return EXIT_SUCCESS;
        }
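    For reference, a small follow-up sketch (error handling omitted) that makes the reaping visible. Per POSIX, the first successful waitpid() is itself the "clean up": the kernel keeps the exit record for as long as the zombie exists, then discards it on that first call, so a second call fails with ECHILD, and the double call above only appears to work because status still holds the value from the first call:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main()
        {
            pid_t pid = fork();
            if (pid == 0)
                _exit(17);                             /* child */

            int status;
            pid_t first  = waitpid(pid, &status, 0);   /* reaps the zombie; returns pid */
            pid_t second = waitpid(pid, &status, 0);   /* returns -1 with errno == ECHILD */
            printf("first=%d second=%d (%s)\n", (int)first, (int)second,
                   second == -1 ? strerror(errno) : "unexpected success");
            return 0;
        }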

    Read the article

  • How to use javascript to get information from the content of another page (same domain)?

    - by hlovdal
    Let's say I have a web page (/index.html) that contains the following:

        <li>
          <div>item1</div>
          <a href="/details/item1.html">details</a>
        </li>

    and I would like to have some javascript on /index.html to load that /details/item1.html page and extract some information from it. The page /details/item1.html might contain things like:

        <div id="some_id">
          <a href="/images/item1_picture.png">picture</a>
          <a href="/images/item1_map.png">map</a>
        </div>

    My task is to write a Greasemonkey script, so changing anything server-side is not an option. To summarize: javascript is running on /index.html, and I would like the javascript code to add some information on /index.html extracted from both /index.html and /details/item1.html. My question is how to fetch information from /details/item1.html. I currently have written code to extract the link (e.g. /details/item1.html) and pass it on to a method that should extract the wanted information (at first just the .innerHTML from the some_id div is OK; I can process further later). The following is my current attempt, but it does not work. Any suggestions?

        function get_information(link)
        {
            var obj = document.createElement('object');
            obj.data = link;
            document.getElementsByTagName('body')[0].appendChild(obj)

            var some_id = document.getElementById('some_id');
            if (! some_id) {
                alert("some_id == NULL");
                return "";
            }
            return some_id.innerHTML;
        }
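    One direction I'm considering (an untested sketch): since the page is same-domain, fetch it with XMLHttpRequest and parse the response into a detached document, instead of appending an <object>, which loads asynchronously and whose content never becomes part of the parent document. (If the target browser's DOMParser can't handle 'text/html', assigning the response to a detached div's innerHTML is the usual fallback.)

        function get_information(link, callback) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', link, true);
            xhr.onload = function () {
                // Parse the fetched page without attaching it to the visible DOM.
                var doc = new DOMParser().parseFromString(xhr.responseText, 'text/html');
                var some_id = doc.getElementById('some_id');
                callback(some_id ? some_id.innerHTML : '');
            };
            xhr.send();
        }

        // Usage (asynchronous, so the result arrives in a callback):
        get_information('/details/item1.html', function (html) { alert(html); });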

    Read the article

  • How to store and collect data for mining such information as most viewed for last 24 hours, last 7 days, etc.?

    - by Kirzilla
    Hello. Let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K, all information about videos is stored in MySQL, and the number of daily video views is about 1.5KK. As instruments we have Hard Disk Drive (text files), MySQL, and Redis.

    Views
    • top viewed
    • top viewed last 24 hours
    • top viewed last 7 days
    • top viewed last 30 days
    • top rated last 365 days

    How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate views per video for the previous hour and insert this information into MySQL. Then recalculate totals (for the last 24 hours) and update statistics in tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me, because we have to store information about the last 365 days for each video to make correct calculations. Is there any other good method? Probably we have to choose other instruments for this? Thank you.
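    To make one alternative concrete, here is a rough sketch (Python with redis-py; untested, key names invented) of a time-bucket approach: keep one Redis sorted set of view counts per day, and merge the last N buckets on a schedule to produce each "top viewed" list, so no per-video 365-day history has to live in MySQL:

        import datetime
        import redis

        r = redis.Redis()

        def record_view(video_id):
            day = datetime.date.today().isoformat()          # e.g. "2008-01-01"
            r.zincrby("views:" + day, 1, video_id)           # redis-py 3.x argument order
            r.expire("views:" + day, 60 * 60 * 24 * 32)      # keep ~a month of daily buckets

        def top_viewed(days, limit=20):
            today = datetime.date.today()
            keys = ["views:" + (today - datetime.timedelta(d)).isoformat()
                    for d in range(days)]
            r.zunionstore("views:agg", keys)                 # sums counts across buckets
            return r.zrevrange("views:agg", 0, limit - 1, withscores=True)

    The 365-day lists would still need longer-lived buckets (e.g. monthly sets rolled up from the daily ones), but the principle is the same.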

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
    Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or, worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:
    • It is cost effective
    • It is “current” (meaning the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
    • It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
    • It solves the right problem

    With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged.

    A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:
    • Specific details of security vulnerabilities
    • Including exploit information or technical details of the vulnerability
    • Whether or not there is any mitigation available (as in a patch)
    PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
    The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!“ It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

    Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses, and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

    So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:
    • Contact PCI with your concerns
    • Contact Oracle (we are looking for vendors to sign our statement of concern)
    • And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application
    I like to be charitable and say “PCI meant well,” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
    By Way of Explanation…

    Unrelated to PCI whatsoever, and the explanation for why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows) you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket:

    Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer.

    * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are some points that we covered:
    • The DMV related to wait stats resets when we reset SQL Server services
    • The DMV related to wait stats resets when we manually reset the wait types
    However, at times there is a need to make this data persistent so that we can take a look at it later on. Sometimes, performance tuning experts make modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.

        -- Create Table
        CREATE TABLE [MyWaitStatTable](
            [wait_type] [nvarchar](60) NOT NULL,
            [waiting_tasks_count] [bigint] NOT NULL,
            [wait_time_ms] [bigint] NOT NULL,
            [max_wait_time_ms] [bigint] NOT NULL,
            [signal_wait_time_ms] [bigint] NOT NULL,
            [CurrentDateTime] DATETIME NOT NULL,
            [Flag] INT
        )
        GO
        -- Populate Table at Time 1
        INSERT INTO MyWaitStatTable
            ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
        SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],
               GETDATE(), 1
        FROM sys.dm_os_wait_stats
        GO
        -- Desired Delay (for one hour)
        WAITFOR DELAY '01:00:00'
        -- Populate Table at Time 2
        INSERT INTO MyWaitStatTable
            ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
        SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],
               GETDATE(), 2
        FROM sys.dm_os_wait_stats
        GO
        -- Check the difference between Time 1 and Time 2
        SELECT T1.wait_type, T1.wait_time_ms Original_WaitTime,
               T2.wait_time_ms LaterWaitTime,
               (T2.wait_time_ms - T1.wait_time_ms) DiffenceWaitTime
        FROM MyWaitStatTable T1
        INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
        WHERE T2.wait_time_ms > T1.wait_time_ms
          AND T1.Flag = 1 AND T2.Flag = 2
        ORDER BY DiffenceWaitTime DESC
        GO
        -- Clean up
        DROP TABLE MyWaitStatTable
        GO

    If you notice the script, I have used an additional column called Flag. I use it to record when I captured the wait stats, and then use it in my SELECT query to select wait stats related to that time group. Many times, I select more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the difference between them. In a future blog post, we will talk about specific wait stats.

    Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Is this information about me as a programmer concise and good enough?

    - by Nick Rosencrantz
    I not only want you to review my resume; please also tell me what you think Google means when they answered me: "We don't look at personal letters and we like your resume and we can recommend you internally but we need measurable experience." What is meant by "measurable" here? Do they mean like O(1) compared to O(n), selling an entire company, grades, or what? This is what I sent:

    Curriculum vitae
    Nick Rosencrantz
    Competence: System development, web development
    Technical competence: Java, Javascript, HTML, XML, CSS, AJAX, PHP, SQL, Python

    Employments:
    2012-     Mobile Innovation AB, System Developer. IT consultant (Java programmer)
    2011-2012 Bnano International Ltd, System Developer. Python programming in Google App Engine
    2008-2009 Sweden Island AB, System Developer. Programming C++ and Java EE components
    2003-2007 Studies, Stockholm School of Economics. During studies worked as network technician at Effnet AB
    2000-2002 Jadestone AB, System Developer. System development in Java/J2EE. In 2001: KTH, Assistant. Teaching application server programming in Java Enterprise + weblogic + Informix.
    1999-2000 Studies, KTH
    1996-1998 Spray.se, System development, Researcher
    1995      Finance broker. Back-office work with financial instruments
    1993-1994 Computer & Audio-Technical Systems AB. Programming, summer job

    Education/Courses: Stockholm School of Economics, Master of Science diploma; KTH, Computer Science undergraduate studies
    Languages: Swedish, English, also some German and French
    Born 1973, Swedish citizen

    I also have a project-based CV which is several pages long, but the above is about what I was aiming for in the beginning when I was looking for a job. I now have employment as an IT consultant in central Stockholm, and I want to make my resume concise and also understand what Google meant by their answer. (It was a Swedish Google employee who recruited via LinkedIn from my Stockholm School of Economics groups, since that is a small elite economics school where I took my M.Sc., and KTH is one of the largest universities in northern Europe. I sent her a link to my CV, and she said she could promote me internally if I added "measurable experience", and I've been thinking for weeks about what that may mean.)

    Read the article

  • Tool to convert from XML to CSV and back to XML without losing information?

    - by Lanaru
    I would like to convert an XML file into CSV, and then be able to convert that CSV back into the original XML file. Is there a tool out there that makes this possible? The reason I want to do this is that I want to generate a ton of data to be stored in an xml file that my program will parse. I can easily generate a lot of data with a CSV file through the use of Excel macros, so that's why I'm looking for a tool to convert between these two formats. Any alternative suggestions are greatly appreciated.
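    For what it's worth, a lossless round trip is only straightforward when the XML is flat and uniform (one repeated record element with scalar children); arbitrary nesting and attributes have no natural CSV shape, which is probably why general-purpose tools are rare. A rough sketch (Python, untested, element names invented) for the flat case:

        import csv
        import xml.etree.ElementTree as ET

        def xml_to_csv(xml_path, csv_path, record_tag="record"):
            root = ET.parse(xml_path).getroot()
            records = [{child.tag: child.text or "" for child in rec}
                       for rec in root.iter(record_tag)]
            fields = sorted({key for rec in records for key in rec})
            with open(csv_path, "w", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=fields)
                writer.writeheader()
                writer.writerows(records)
            return root.tag  # remember the root name so the reverse step can restore it

        def csv_to_xml(csv_path, xml_path, root_tag="data", record_tag="record"):
            root = ET.Element(root_tag)
            with open(csv_path, newline="") as f:
                for row in csv.DictReader(f):
                    rec = ET.SubElement(root, record_tag)
                    for tag, text in row.items():
                        ET.SubElement(rec, tag).text = text
            ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)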

    Read the article

  • Is it appropriate for a class to only be a collection of information with no logic?

    - by qegal
    Say I have a class Person that has instance variables age, weight, and height, and another class Fruit that has instance variables sugarContent and texture. The Person class has no methods save setters and getters, while the Fruit class has both setters and getters and logic methods like calculateSweetness. Is the Fruit class the type of class that is better practice than the Person class? What I mean by this is that the Person class seems like it doesn't have much purpose; it exists solely to organize data, while the Fruit class organizes data and actually contains methods for logic.
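    To make the contrast concrete, here is a minimal sketch (Java-flavored; the fields and the sweetness formula are invented for illustration):

        // Person: a pure data holder -- nothing but state and accessors.
        public class Person {
            private int age;
            private double weightKg;
            private double heightM;

            public int getAge() { return age; }
            public void setAge(int age) { this.age = age; }
            public double getWeightKg() { return weightKg; }
            public void setWeightKg(double weightKg) { this.weightKg = weightKg; }
            public double getHeightM() { return heightM; }
            public void setHeightM(double heightM) { this.heightM = heightM; }
        }

        // Fruit: the same kind of state, plus the logic that uses it.
        public class Fruit {
            private double sugarContent; // grams per 100 g
            private String texture;

            public double getSugarContent() { return sugarContent; }
            public void setSugarContent(double s) { this.sugarContent = s; }
            public String getTexture() { return texture; }
            public void setTexture(String t) { this.texture = t; }

            // Behavior lives next to the data it depends on.
            public double calculateSweetness() {
                return sugarContent * ("crisp".equals(texture) ? 1.1 : 1.0);
            }
        }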

    Read the article

  • Can SAML use salesforce login information to log into another system inside a view?

    - by steve
    I want my sales people, who use Salesforce every day, to be able to view orders in an ecommerce system through a dashboard view in Salesforce. The ecom is built and sitting on my web server, but the sales reps don't like to log into too many things in one day, so they are not using what I built them. I read recently that Salesforce can use SAML, but it was unclear what you can do with it. What I'd like is to make a new dashboard view that will open up the ecom inside of Salesforce. The ecom uses a login system, but if it is inside of Salesforce, would SAML automatically log into the ecom?

    Read the article

  • My father is a doctor. He is insisting on writing a database to store non-critical patient information, with no programming background

    - by Dominic Bou-Samra
    So, my father is currently in the process of "hacking" together a database using FileMaker Pro, a GUI-based databasing tool, for his small (4-doctor) practice. The database will be used to help ease the burden of reporting from medical machines, streamlining quite a clumsy process. He's got no programming background, and seems to be doing everything in his power to not learn things correctly. He's got duplicate data types, no database-enforced relationships (foreign/primary key constraints) and a dozen other issues. He's doing it all by hand via the GUI tool, using YouTube videos. My issue is that whilst I want him to succeed 100%, I don't think it's appropriate for him to be handling these types of decisions. How do I convince him that, without some sort of education in these topics, a hacked-together solution is a bad idea? He can be quite stubborn, and I think he sees these types of jobs as "child's play". How should I approach this? Is it even that bad an idea, or am I correct in thinking he should hire a proper DBA/developer to handle this so that it doesn't become a maintenance nightmare? NB: I am a developer consultant of 4 years and I've seen my share of painful customer implementations.

    Read the article

  • Do you know some information about train travel in China?

    - by user79989
    My friend and I are planning a trip to China next year. Well, train travel in China is an interesting experience, with the world's fastest train (Guangzhou to Wuhan), the world's highest train (in Tibet) and the world's oldest working train (from Tongliao to Baotou in the north of China). Now, travelling in China by train is not always easy. You can do a Hong Kong to Beijing train trip, and buy those tickets online. But to be honest with you, most of that journey is pretty boring. The best part of it is going through northern Guangdong and southern Hunan provinces. ChinaTour.com is a reliable China travel agency based in the USA, which has specialized in inbound China travel for decades.

    Read the article

  • What patterns book for iOS development contains this specific information? [closed]

    - by Brett Ryan
    I've read several books on iOS development and Objective-C; however, what a lot of them teach is how to work with interfaces, and all contain the model inside the view controller, i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in what the best practices are for designing the structure of an application. Specifically I'm interested in best practices for the following:
    1. How to separate a model from the view controller. I think I know how to do this by simply replacing the NSArray-style example with a specific model object; however, what I do not know how to do is alert the view when the model changes. For example, in .NET I would solve this by conforming to INotifyPropertyChanged and databinding, and similarly with Java I would use PropertyChangeListener (see the sketch after this question).
    2. How to create a service model for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object to manage an internal DB, and also services for communicating with remote endpoints. I need to learn the best ways to do this in a way that interface components can subscribe to events such as widgetUpdated. These services should be singleton classes and somehow dependency-injected into model/controller objects.
    Books I've read so far are: Programming in Objective-C (4th Edition); Beginning iOS 5 Development: Exploring the iOS SDK; The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers; Learn Objective-C on the Mac: For OS X and iOS. I've also purchased the following updated books but not yet read them: The Core iOS 6 Developer's Cookbook (4th edition); Programming in Objective-C (5th Edition). I come from a Java and C# background with 15 years' experience, and I understand that many of the ways I would do things in these languages may not fit the ObjC way of developing applications. Would someone be able to provide me with the book on this topic containing this specific subject matter?
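
    Not from any of the books above: on iOS the standard mechanisms for point 1 are key-value observing (KVO), NSNotificationCenter, or a delegate protocol. As a language-agnostic illustration of the underlying observer pattern (sketched in Python for brevity; the Widget class and its property are invented), the model keeps a list of callbacks and fires them from its setters:

    ```python
    class Widget:
        """Model object that notifies registered observers when it changes."""

        def __init__(self, name):
            self._name = name
            self._observers = []  # callables taking (widget, property_name)

        def add_observer(self, callback):
            self._observers.append(callback)

        @property
        def name(self):
            return self._name

        @name.setter
        def name(self, value):
            self._name = value
            for notify in self._observers:
                notify(self, "name")  # plays the role of KVO's observeValueForKeyPath:

    # A view controller would register itself and refresh its view in the callback:
    w = Widget("sprocket")
    w.add_observer(lambda widget, prop: print(f"{prop} changed to {widget.name!r}"))
    w.name = "gear"  # prints: name changed to 'gear'
    ```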

    Read the article

  • How to model the components of a non-Information System?

    - by Adel C Kod
    So I am working on a project that's related to kernel code (specifically related to the TCP/IP stack of the kernel). I need to build some models to describe the functionality and components of my system. Initially I thought about a class diagram; it can describe the general architecture of my system, but it doesn't make sense since my code is VERY structured (written in standard C). I also thought about DFDs; they'd describe the processes of my system and how the data is flowing. But they contain something which doesn't really fit in: data stores. I have no databases here (at all). For the functionality, other team members suggested using activity and sequence diagrams, which is kinda okay with me, but what about the system components? So basically my question is: I want to describe the components of my system; what do you suggest as a meaningful diagram to follow? (Again, the project is a research low-level systems-oriented project with almost no user interface at all)

    Read the article

  • What is the feature that imports URL information into Facebook and Google+?

    - by Michael
    What I am trying to do: I want to be able to import a heading, short description and an image into my website, just using a link. I am using Joomla but can switch to any CMS, or try to build an extension myself. The problem is I don't know where to look for example code, or what to look for. To me it doesn't seem like too difficult a process. Also, the code must be out there; with a little tweaking I can probably use someone else's, but I don't even know what to search for.
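
    The feature in question is built largely on Open Graph meta tags (og:title, og:description, og:image), with the page <title> and the description meta tag as fallbacks, so "Open Graph" is a good search term. A minimal fetch-and-parse sketch, assuming the third-party requests and beautifulsoup4 packages are installed:

    ```python
    import requests
    from bs4 import BeautifulSoup

    def fetch_preview(url):
        """Return (title, description, image_url) for a link, preferring Open Graph tags."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")

        def og(prop):
            tag = soup.find("meta", property=f"og:{prop}")
            return tag["content"] if tag and tag.has_attr("content") else None

        title = og("title") or (soup.title.string if soup.title else None)
        description = og("description")
        if description is None:
            tag = soup.find("meta", attrs={"name": "description"})
            description = tag["content"] if tag and tag.has_attr("content") else None
        image_url = og("image")
        return title, description, image_url
    ```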

    Read the article

  • Do I need to paste open source license information at the top of my webpage?

    - by Rich
    I'm developing a JavaScript application that uses several open source JavaScript projects. All their licenses have a phrase like "You must give any other recipients of the Work or Derivative Works a copy of this License". Does this mean I need to make a massive HTML comment at the top of my webpage with all the licenses of the software that I use? I ask this question because I've never seen the source code of a webpage that does this.

    Read the article

  • How to pass information across domains to ask for newsletter only once?

    - by Michal Stefanow
    Let's assume the following scenario. I have two sites: example1.com and example2.com. When a user visits 1 there is a prompt: "please sign up to the newsletter". The same thing happens when a user visits 2. However, when navigating from 1 to 2 I don't want the signup form to be shown. My first thought was 3rd-party cookies, but it seems that they are blocked / not working: http://stackoverflow.com/questions/4701922/how-does-facebook-set-cross-domain-cookies-for-iframes-on-canvas-pages?rq=1 http://stackoverflow.com/questions/172223/how-do-i-set-cookies-from-outside-domains-inside-iframes-in-safari?rq=1 Another thought is to append #noshow to each URL, but that would require some work; for instance, a script that would intercept click / tap events and modify the URL structure depending on the address (but that seems hacky). I wonder if you know a robust, well-established solution to this issue? Thanks
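
    One approach that avoids third-party cookies entirely: when one site renders a link to the other, it appends a short-lived signed token as a query parameter, and the receiving site sets its own first-party "already prompted" cookie once the token verifies. A stdlib-only Python sketch of the token half (the shared secret, parameter name, and 5-minute lifetime are all assumptions):

    ```python
    import hashlib
    import hmac
    import time

    SECRET = b"shared-secret-known-to-both-sites"  # hypothetical; deploy via config

    def make_token(now=None):
        # Sign a timestamp; append the result to cross-site links,
        # e.g. https://example2.com/?seen_signup=<token>
        ts = str(int(now if now is not None else time.time()))
        sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
        return f"{ts}.{sig}"

    def verify_token(token, max_age=300):
        # Accept only fresh, correctly signed tokens; reject anything malformed.
        try:
            ts, sig = token.split(".")
            age = time.time() - int(ts)
        except ValueError:
            return False
        expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
        return 0 <= age <= max_age and hmac.compare_digest(sig, expected)
    ```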

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >