Search Results

Search found 94919 results on 3797 pages for 'new folder'.


  • Certifications in the new Certify - March 2011 Update

    - by richard.miller
    The most up-to-date certifications are now available in Certify - New Additions March 2011! What's not yet available can still be found in Classic Certify. We think that the new search will save you a ton of time and energy, so try it out and let us know. NOTE: Not all cert information is in the new system. If you type in a product name and do not find it, send us feedback so we can find the team to add it! Also, we have been listening to every feedback message coming in. We have plans to make some improvements based on your feedback AND add the missing data. Thanks for your help! Note: Oracle Fusion Middleware certifications are available via oracle.com: Fusion Middleware Certifications.
    Certifications viewable in the new Certify Search:
    - Enterprise PeopleTools Release 8.50 and Release 8.51 (added March 2011!)
    - Oracle Database
    - Oracle Database Options
    - Oracle Database Clients (they apply to both 32-bit and 64-bit)
    - Oracle Beehive
    - Oracle Collaboration Suite
    - Oracle E-Business Suite, now with Release 11i & 12!
    - Oracle Siebel Customer Relationship Management (CRM)
    - Oracle Governance, Risk, and Compliance Management
    - Oracle Financial Services
    - Oracle Healthcare
    - Oracle Life Sciences
    - Oracle Enterprise Taxation Management
    - Oracle Retail
    - Oracle Utilities
    - Oracle Cross Applications
    - Oracle Primavera
    - Oracle Agile
    - Oracle Transportation Management (G-L)
    - Oracle Value Chain Planning
    - Oracle JD Edwards EnterpriseOne (NEW! Jan 2011) 8.9+ and SP23+
    - Oracle JD Edwards World (A7.3, A8.1, A9.1, and A9.2)
    Certifications viewable in Classic Certify: Classic Certify is the "old" user interface; clicking the "Classic Certify" link from Certifications > QuickLinks will take you there.
    - Enterprise PeopleTools Release 8.49 (Coming Soon)
    - Enterprise Manager
    Other resources: see the Tips and Tricks for the new Certify, watch the 4-minute introduction to the new Certify, or see how to get the most out of Certify with an advanced searching and features demo.

    Read the article

  • Tips on how to notify a user of new features in your game

    - by brent777
    I have noticed a problem when releasing new features for a game that I wrote for Android and published on the Google Play Store. Because my game is "stage-based" - and not a game like Hay Day, for example, where users will just go into the game every day since it can't really be finished - my users are not aware of new features that I release for the game. For example, if I publish a new version of my game and it contains a couple of new stages, most of their devices will just auto-update the game and they don't even notice it, let alone think to check out what's new. This is why an approach like popping open a dialog that showcases the new feature(s), when they open the game for the first time after the update, is not really sufficient. I am looking for some tips on an approach that will draw my users back into the game, where they could then read more detail about the new features in such a dialog. I was thinking of something like a notification that tells them to check out the new features after an update is done, but I am not sure if this is a good idea. Any suggestions to help me solve this problem would be awesome.
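    To illustrate the notification idea, this Java sketch is roughly what I have in mind (MainActivity, the icon and the strings are just placeholders): a receiver registered in the manifest for android.intent.action.MY_PACKAGE_REPLACED, which the system broadcasts to an app right after it has been updated, posting a notification that opens the game on a what's-new dialog:

        import android.app.Notification;
        import android.app.NotificationManager;
        import android.app.PendingIntent;
        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.support.v4.app.NotificationCompat;

        // declared in AndroidManifest.xml with an intent-filter for
        // android.intent.action.MY_PACKAGE_REPLACED (API 12+)
        public class UpdateNotifier extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                // tapping the notification opens the game on the what's-new dialog
                Intent open = new Intent(context, MainActivity.class);
                open.putExtra("showWhatsNew", true);
                PendingIntent pi = PendingIntent.getActivity(context, 0, open, 0);

                Notification n = new NotificationCompat.Builder(context)
                        .setSmallIcon(R.drawable.ic_launcher)
                        .setContentTitle("New stages added!")
                        .setContentText("Tap to check out what's new.")
                        .setContentIntent(pi)
                        .setAutoCancel(true)
                        .build();

                NotificationManager nm = (NotificationManager)
                        context.getSystemService(Context.NOTIFICATION_SERVICE);
                nm.notify(1, n);
            }
        }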

    Read the article

  • Linq to xml not able to add new elements

    - by Fore
    We save our XML in a "text" field in the database. So first I check if any XML exists; if not, I create a new XDocument and fill it with the necessary XML, else I just add the new element. The code looks like this:
    XDocument doc = null;
    if (item.xmlString == null || item.xmlString == "")
    {
        doc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"),
            new XElement("DataTalk",
                new XAttribute(XNamespace.Xmlns + "xsi", "http://www.w3.org/2001/XMLSchema-instance"),
                new XAttribute(XNamespace.Xmlns + "xsd", "http://www.w3.org/2001/XMLSchema"),
                new XElement("Posts",
                    new XElement("TalkPost"))));
    }
    else
    {
        doc = XDocument.Parse(item.xmlString);
    }
    This works to create the structure, but then the problem appears when I want to add a new TalkPost: I get an error saying "incorrectly structured document". This is the code for adding new elements:
    doc.Add(new XElement("TalkPost",
        new XElement("PostType", newDialog.PostType),
        new XElement("User", newDialog.User),
        new XElement("Customer", newDialog.Customer),
        new XElement("PostedDate", newDialog.PostDate),
        new XElement("Message", newDialog.Message)));
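    Update: I am guessing the problem is that doc.Add(...) tries to append a second root element next to DataTalk, which an XDocument cannot have. Would adding the post under the existing Posts element instead be the right approach? Something like:

        // add the new TalkPost under <DataTalk><Posts> instead of at document level
        doc.Root.Element("Posts").Add(new XElement("TalkPost",
            new XElement("PostType", newDialog.PostType),
            new XElement("User", newDialog.User),
            new XElement("Customer", newDialog.Customer),
            new XElement("PostedDate", newDialog.PostDate),
            new XElement("Message", newDialog.Message)));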

    Read the article

  • New in MySQL Enterprise Edition: Policy-based Auditing!

    - by Rob Young
    For those with an interest in MySQL, this weekend's MySQL Connect conference in San Francisco has gotten off to a great start. On Saturday Tomas announced the feature-complete MySQL 5.6 Release Candidate that is now available for Community adoption and testing. This announcement marks the sprint to GA, which should be ready for release within the next 90 days. You can get a quick summary of the key 5.6 features here or, better yet, download the 5.6 RC (under "Development Releases"), review what's new and try it out for yourself! There were also product-related announcements around MySQL Cluster 7.3 and MySQL Enterprise Edition. This latter announcement is of particular interest if you are faced with internal and regulatory compliance requirements, as it addresses and solves a pain point that is shared by most developers and DBAs: new, out-of-the-box compliance for MySQL applications via policy-based audit logging of user and query level activity.
    One of the most common requests we get for the MySQL roadmap is for quick and easy logging of audit events. This is mainly due to how web-based applications have evolved from nice-to-have enablers to mission-critical revenue generators, and the important role MySQL plays in the new dynamic. In today's virtual marketplace, PCI compliance guidelines ensure credit card data is secure within e-commerce apps; from a corporate standpoint, Sarbanes-Oxley, HIPAA and other regulations guard the medical, financial, public sector and other personal-data-centric industries. For the supporting applications, the audit policies and controls that monitor the eyes and hands that have viewed and acted upon the most sensitive of data are most commonly implemented on the back-end database.
    With this in mind, MySQL 5.5 introduced an open audit plugin API that enables all MySQL users to write their own auditing plugins based on application-specific requirements. While the supporting docs are very complete and provide working code samples, writing an audit plugin requires time and low-level expertise to develop, test, implement and maintain. To help those who don't have the time and/or expertise to develop such a plugin, Oracle now ships MySQL 5.5.28 and higher with an easy-to-use, out-of-the-box auditing solution: MySQL Enterprise Audit.
    MySQL Enterprise Audit
    The premise behind MySQL Enterprise Audit is simple: we wanted to provide an easy-to-use, policy-based auditing solution that enables you to quickly and seamlessly add compliance to your MySQL applications. MySQL Enterprise Audit meets this requirement by enabling you to:
    1. Easily install the needed components. Installation requires an upgrade to MySQL 5.5.28 (Enterprise edition), which can be downloaded from the My Oracle Support portal or the Oracle Software Delivery Cloud. After installation, you simply add the following to your my.cnf file to register and enable the audit plugin: [mysqld] plugin-load=audit_log.so (keep in mind the audit_log suffix is platform dependent, so .dll on Windows, etc.), or alternatively you can load the plugin at runtime: mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';
    2. Dynamically enable and disable the audit stream for a specific MySQL server. A new global variable called audit_log_policy allows you to dynamically enable and disable audit stream logging for a specific MySQL server. The variable parameters are described below.
    3. Define audit policy based on what needs to be logged (everything, logins, queries, or nothing), by server. The new audit_log_policy variable uses the following valid, descriptively named values to enable or disable audit stream logging and to filter the audit events that are logged to the audit stream: "ALL" - enable audit stream and log all events; "LOGINS" - enable audit stream and log only login events; "QUERIES" - enable audit stream and log only query events; "NONE" - disable audit stream.
    4. Manage audit log files using basic MySQL log rotation features. A new global variable, audit_log_rotate_on_size, allows you to automate the rotation and archival of audit stream log files based on size, with archived log files renamed and appended with a datetime stamp when a new file is opened for logging.
    5. Integrate the MySQL audit stream with MySQL tools, Oracle tools and other third-party solutions. The MySQL audit stream is written as XML, using UTF-8, and can be easily formatted for viewing using a standard XML parser. This enables you to leverage tools from MySQL and others to view the contents. The audit stream was also developed to meet the Oracle database audit stream specification, so combined Oracle/MySQL shops can import and manage MySQL audit images using the same Oracle tools they use for their Oracle databases.
    So, assuming a successful MySQL 5.5.28 upgrade or installation, a common set up and use case scenario might look something like the sketch below. It should be noted that MySQL Enterprise Audit was designed to be transparent at the application layer, by allowing you to control the mix of log output buffering and asynchronous or synchronous disk writes to minimize the overhead that comes when the audit stream is enabled. The net result is that, depending on the chosen audit stream log options, most application users will see little to no difference in response times when the audit stream is enabled.
    So what are your next steps? Get all of the details on MySQL Enterprise Audit, including all of the additional configuration options, from the MySQL documentation.
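    To make that concrete, here is a minimal sketch of such a setup (the rotation size is just an illustrative value; see the documentation for the full set of options):

        # my.cnf - register and enable the audit plugin
        # (the audit_log suffix is platform dependent, e.g. .dll on Windows)
        [mysqld]
        plugin-load=audit_log.so
        audit_log_policy=ALL
        audit_log_rotate_on_size=1073741824

        -- then, at runtime, narrow the policy to login events for this server
        mysql> SET GLOBAL audit_log_policy = 'LOGINS';
        -- and confirm the plugin is loaded and active
        mysql> SHOW PLUGINS;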
MySQL Enterprise Edition customers can download MySQL 5.5.28 with the Audit extension for production use from the My Oracle Support portal. Everyone can download MySQL 5.5.28 with the Audit extension for evaluation from the Oracle Software Delivery Cloud. Learn more about MySQL Enterprise Edition. As always, thanks for your continued support of MySQL!

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from the MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1. Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous; I really can't include them all.
    I. New AI (Automated Installer) RBAC profiles have been introduced to enable delegation of installation tasks.
    II. The interactive installer now supports installing the OS to iSCSI targets.
    III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests to speed up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg
    IV. The new command svcbundle helps you to create SMF manifests without having to struggle with XML editing. (By the way, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop)
    V. pfedit: Ever wondered how to delegate editing permissions for certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root by simply starting a shell from that vi. Now, the new pfedit command provides a solution to exactly this challenge - an auditable, secure, per-user configurable editing possibility. See the pfedit man page for examples, and the quick sketch in the P.S. at the end of this post.
    VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...), has been included in Solaris 11.1 as an alternative to syslog.
    VII. Zones: Solaris Zones - as a major Solaris differentiator - got lots of love in terms of new features. ZOSS (Zones on Shared Storage): placing your zones on shared storage (FC, iSCSI) has never been this easy - via zonecfg. Parallel updates: with S11's boot environments, updating zones was no problem and meant no downtime anyway, but still, now you can update them in parallel - a way faster update action if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness. Per-zone fstype statistics: running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, over kstat, you can find out which zone's I/O has an impact on the other ones; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc Zones also got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature. NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.
    VIII. Security got a lot of attention too: automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits. PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users. SSH in Solaris 11.1 now supports running in FIPS 140-2 mode - that is, in a U.S. government security accredited fashion. SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4, implemented in silicon! That is, government-approved cryptography at HW speed. Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.
    IX. Networking, as one of the core strengths of Solaris 11, has been extended with: Data Center Bridging (DCB) - not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements; DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia. DataLink MultiPathing (DLMP) enables link aggregation to span multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc VNIC live migration is now supported from one physical NIC to another on the fly.
    X. Data management: FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11; in Solaris 11.1, FedFS uses LDAP as the one global nameservice to bind them all. The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm, thus improving iSCSI throughput while reducing CPU utilization on a T4. Storage locking improvements are now RAC aware, speeding up throughput with better locking communication between nodes by up to 20%!
    XI. Kernel performance optimizations: the new Virtual Memory subsystem ("VM2") now scales to 100+ TB memory ranges. The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs. OSM, the Optimized Shared Memory, allows Oracle DBs' SGA to be resized online.
    XII. The Power Aware Dispatcher is now enabled by default, reducing the power consumption of idle CPUs. Also, the LDoms' Power Management policies and the poweradm settings in the Solaris 11 OS will cooperate.
    XIII. x86 boot: upgrade to GRUB2 (the Grand Unified Bootloader). Because GRUB2 differs syntactically in its configuration from GRUB1, one shall not edit the new grub configuration (grub.cfg) but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2TB.
    XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of less importance at first, but nowadays, having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.
    XV. Support for Solaris Cluster 4.1: the What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But since then it is available, and it requires Solaris 11.1. And it's only a "pkg update" away.
    ...aand I seriously need to stop here. There's a lot I missed - Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, pulseaudio, trusted extensions updates, etc. - but if I mentioned all those then I would effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1. And this blog post is a summary of that extract.
    For closing words, allow me to come back to Requests For Enhancement, RFEs. Any customer can request features. Open up a Support Request, explain that this is an RFE, and describe the feature you/your company would like to have implemented in S11. The more SRs are collected for an RFE, the better its chances of getting implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation, using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input. Feel free to comment about this post too. Except that it's too long ;)
    wbr, charlie
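    P.S.: for the curious, a quick illustrative sketch of the pfedit delegation from item V (the user name is of course just an example):

        # as root: grant user webadmin the authorization to edit /etc/hosts
        usermod -A solaris.admin.edit/etc/hosts webadmin

        # as webadmin: edit the file through the authorization-checked editor
        pfedit /etc/hosts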

    Read the article

  • Populate google.visualization.DataTable for an AnnotatedTimeLine using JSON

    - by Lucifer
    Hi, I have an HttpHandler which returns some JSON in (I think) the correct format for a google.visualization.DataTable, but the AnnotatedTimeLine fails to work. This is the JSON returned by the Handler:
    {cols: [{id: 'DATE', label: 'Date', type: 'date'}, {id: 'KEYWORD51', label: 'vw cheltenham', type: 'number'}, {id: 'KEYWORD52', label: 'volkswagen cheltenham', type: 'number'}, {id: 'KEYWORD61', label: 'vw dealer cheltenham', type: 'number'}], rows: [{c: [{v: new Date(2010, 3, 13)}, {v: 20}, {v: 1}, {v: 2}]}, {c: [{v: new Date(2010, 3, 14)}, {v: 19}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 15)}, {v: 19}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 16)}, {v: 18}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 17)}, {v: 17}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 18)}, {v: 17}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 19)}, {v: 12}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 20)}, {v: 13}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 21)}, {v: 11}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 22)}, {v: 10}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 23)}, {v: 10}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 24)}, {v: 8}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 25)}, {v: 6}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 26)}, {v: 6}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 27)}, {v: 5}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 28)}, {v: 4}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 29)}, {v: 4}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 3, 30)}, {v: 2}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 1)}, {v: 2}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 2)}, {v: 1}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 3)}, {v: 2}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 4)}, {v: 0}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 5)}, {v: 0}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 6)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 7)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 8)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 9)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 10)}, {v: 0}, {v: 0}, {v: 0}]}, {c: [{v: new Date(2010, 4, 11)}, {v: 0}, {v: 1}, {v: 1}]}, {c: [{v: new Date(2010, 4, 12)}, {v: 2}, {v: 1}, {v: 1}]}]}
    This is the JavaScript (I used jQuery to get the JSON, and have also tried $.getJSON()):
    google.load('visualization', '1', { 'packages': ['annotatedtimeline'] });
    google.setOnLoadCallback(loadGraph);
    function loadGraph() {
        $.get("/GraphDataHandler.axd", function(response) {
            drawGraph(response);
        });
    }
    function drawGraph(response) {
        var visualization = new google.visualization.AnnotatedTimeLine(document.getElementById('chart_div'));
        var data = new google.visualization.DataTable(response, 0.6);
        visualization.draw(data, { title: 'Rankings', titleX: 'Date', titleY: 'Position', displayAnnotations: false, allowRedraw: true });
    }
    But if I write the same JSON to the page like below, it works fine!?
    <script type="text/javascript">
    //<![CDATA[
    var gData = {cols: [{id: 'DATE', label: 'Date', type: 'date'}, {id: 'KEYWORD51', label: 'vw cheltenham', type: 'number'}], rows: [{c: [{v: new Date(2010, 3, 13)}, {v: 20}]}, {c: [{v: new Date(2010, 3, 14)}, {v: 19}]}, {c: [{v: new Date(2010, 3, 15)}, {v: 19}]}, {c: [{v: new Date(2010, 3, 16)}, {v: 18}]}, {c: [{v: new Date(2010, 3, 17)}, {v: 17}]}, {c: [{v: new Date(2010, 3, 18)}, {v: 17}]}, {c: [{v: new Date(2010, 3, 19)}, {v: 12}]}, {c: [{v: new Date(2010, 3, 20)}, {v: 13}]}, {c: [{v: new Date(2010, 3, 21)}, {v: 11}]}, {c: [{v: new Date(2010, 3, 22)}, {v: 10}]}, {c: [{v: new Date(2010, 3, 23)}, {v: 10}]}, {c: [{v: new Date(2010, 3, 24)}, {v: 8}]}, {c: [{v: new Date(2010, 3, 25)}, {v: 6}]}, {c: [{v: new Date(2010, 3, 26)}, {v: 6}]}, {c: [{v: new Date(2010, 3, 27)}, {v: 5}]}, {c: [{v: new Date(2010, 3, 28)}, {v: 4}]}, {c: [{v: new Date(2010, 3, 29)}, {v: 4}]}, {c: [{v: new Date(2010, 3, 30)}, {v: 2}]}, {c: [{v: new Date(2010, 4, 1)}, {v: 2}]}, {c: [{v: new Date(2010, 4, 2)}, {v: 1}]}, {c: [{v: new Date(2010, 4, 3)}, {v: 2}]}, {c: [{v: new Date(2010, 4, 4)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 5)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 6)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 7)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 8)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 9)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 10)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 11)}, {v: 0}]}, {c: [{v: new Date(2010, 4, 12)}, {v: 2}]}]};
    //]]>
    </script>
    Please advise how I can get it to work correctly using the JSON calls. Thanks
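    Update: I suspect the inline version works because new Date(2010, 3, 13) is a JavaScript expression, not valid JSON, so the string coming back from the handler can't be parsed into an object. The DataTable JSON format apparently accepts dates encoded as the string "Date(year, month, day)" instead - so if I change the handler to emit, e.g., {v: "Date(2010, 3, 13)"}, would something like this be the right way to consume it?

        function drawGraph(response) {
            // parse the JSON text, letting DataTable convert the "Date(...)" strings
            var data = new google.visualization.DataTable(JSON.parse(response), 0.6);
            var visualization = new google.visualization.AnnotatedTimeLine(
                document.getElementById('chart_div'));
            visualization.draw(data, { displayAnnotations: false, allowRedraw: true });
        }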

    Read the article

  • We have moved to larger offices

    - by Chris Houston
    First of all, we should probably apologise for the complete lack of blogging over the last 6 months! As web developers we are constantly telling our clients that they should keep their blogs up to date, and it seems we have been ignoring our own advice. That being said, we have been very busy moving offices and helping our new host QV Offices set up their new business. As well as all the moving, we have not been sitting on our hands: we have built the new site for DairyMaster over in Ireland, as well as a separate private website for their global distributor network. As Umbraco Gold Partners, we have found more and more that we are working on projects where we are the silent development partners, so although we cannot talk publicly about a lot of the sites we develop, we have some real beauties now in our portfolio :) Now that the dust has settled in our new office (and has been hoovered up!), we are ready for the new year and are looking forward to working on some exciting projects that are currently in the pipeline. We are also intending to run some Hacking sessions for Umbraco, as we now have lots of space for developers to come and work with us, so if you have any ideas for a theme for an Umbraco Hackathon then do let us know. And with that, it just remains to say Happy Christmas to you all, and see you in the new year!

    Read the article

  • How to create a WHM/cPanel account without creating a new sub-domain?

    - by Cyclops
    I have a basic VPS (full root access) with WHM/cPanel, and am learning the ropes. I'm trying to create a new account for an existing domain (mysite.com), and so far WHM won't let me - it either wants a sub-domain or a fake domain, but won't allow two accounts for one domain. In the beginning there was only the root account, and it wouldn't let me log in to cPanel - a quick chat with tech support, and I am informed that I need to create a second account, which I did. So now I have an account, call it ns1me, for the domain mysite.com. Now I want to create a django account. I go through the same process, but WHM won't allow me to use mysite.com as the domain for django. The docs recommend a sub-domain, so I fill the box in with django.mysite.com. I then realize that this has actually created a sub-domain - going to django.mysite.com shows me its home directory, along with helpful information about what version of Apache, Python, and other mods it's running (thanks, Apache). I really don't want a sub-domain, so that's out. Another chat with tech support, and they recommend a fake domain name, as it won't create anything. Sure enough, using a domain of djangomysite.com works, and WHM allows me to create a django account. But of course, I can't send email to [email protected] (where I could to [email protected]). What I want is to be able to create a second account associated with mysite.com (so I can run cPanel logged in as django, send email to [email protected], etc.) - without creating a whole new sub-domain or fake domain.

    Read the article

  • Windows Vista Context Menu>New... does not find entries

    - by Paul
    I was trying to remove a virus and foolishly did not back up the registry keys I deleted, because I thought I only deleted entries from the folders of programs I did not care about. However, I think I have done something wrong here: now when I open a context menu (right click) in any location and hover over the "New..." option, I don't get any options - just a greyed-out box saying "(Empty)". So far I have found out that the entries themselves are still there (using the locations provided here: Windows 7 - Add an item to 'new' context menu). I have also used a program recommended in that thread, which also finds the entries intact and enabled. So it seems maybe I have deleted the entry which tells Vista where to look to find the files that can be created. How can I restore this so entries are shown again? I know System Restore is an option, but as I have said, I did this when removing a (very stubborn) virus, so that is the last resort.
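    In case it helps, I believe the "New" menu is assembled from ShellNew subkeys under the file-extension keys in HKEY_CLASSES_ROOT; a typical intact entry (the .txt one, as an example) looks like this in .reg form:

        Windows Registry Editor Version 5.00

        ; "New > Text Document" comes from the ShellNew subkey of .txt;
        ; NullFile tells Explorer to create an empty file
        [HKEY_CLASSES_ROOT\.txt\ShellNew]
        "NullFile"=""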

    Read the article

  • Samba folder is gone

    - by bioShark
    I seem to have some issues sharing folders from my Ubuntu 12.04 machine to a Win7 machine. After playing around with the settings, I decided to revert to Samba's original settings by reinstalling it:
    sudo apt-get purge samba
    sudo rm -rf /etc/samba/ /etc/default/samba
    sudo apt-get install samba
    Just to be sure, I also ran:
    sudo apt-get install samba samba-common system-config-samba winbind
    Now I can't find the /etc/samba folder any more. Even when I try to share a folder through Nautilus, I get:
    Samba's testparm returned error 1: Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) params.c:OpenConfFile() - Unable to open configuration file "/etc/samba/smb.conf": No such file or directory Error loading services.
    The same happens when I try to list it:
    xxx@xxx:~$ ll /etc/samba
    ls: cannot access /etc/samba: No such file or directory
    Any ideas what I did wrong, or what other package I am missing? cheers
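    Update: I am wondering if the problem is that "rm -rf /etc/samba" deleted dpkg-managed conffiles, and a plain reinstall will not recreate conffiles it thinks were removed on purpose. Would purging again and asking dpkg to restore missing conffiles bring smb.conf back? Something like:

        # remove samba together with its recorded configuration state
        sudo apt-get purge samba samba-common

        # reinstall, telling dpkg to reinstate any missing conffiles
        sudo apt-get install -o Dpkg::Options::="--force-confmiss" samba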

    Read the article

  • What's new in Xamarin and iOS7 - webinar

    - by Wallym
    I recently did an online webinar regarding the new iOS7 and Xamarin. In it, I covered the basics of what is new in iOS7 along with what is new in Xamarin's developer platform. Please take some time and view this webinar. The items that were covered include:
    - What's new in iOS7.
    - The XCode Design Surface.
    - An example showing new iOS7 View Animations.
    - What's new with Xamarin and async, await, and HttpClient.
    - A demo of Razor Templating.
    - The Xamarin.iOS Plugin for Visual Studio.
    ** The video only works in Windows. I don't control the content, so I have to go with what I am given. :-( **

    Read the article

  • How to copy a folder from /home/kevin to /opt

    - by lambda23
    I have a new computer installed with Ubuntu 12.04, and I want to install the wireless driver named compat-wireless-3.5-3. Before that, I put the driver folder in /home/kevin. I want to install it in the /opt directory, so before installing the driver, I want to copy the driver folder from /home/kevin to /opt. I tried an ordinary copy (right click, Copy, Paste), but the Paste option was greyed out. After that, I tried this in a terminal:
    sudo cp /home/kevin/compat-wireless-3.5-3 /opt
    But I get this message:
    cp: omitting directory `/home/kevin/compat-wireless-3.5-3'
    What does the message mean? I can't copy the driver.
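    Update: from the cp man page, it seems directories are skipped unless the copy is recursive; "omitting directory" is cp saying it will not copy a folder without -R. So presumably this is the fix:

        # copy the whole driver tree into /opt
        sudo cp -R /home/kevin/compat-wireless-3.5-3 /opt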

    Read the article

  • Why do we need fork to create new process

    - by user3671483
    In Unix, whenever we want to create a new process, we fork the current process, i.e. we create a new child process which is exactly the same as the parent process, and then we do the exec system call to replace the child process with a new process, i.e. we replace all the data of the child process with that of the new program. Why do we create a copy of the parent process in the first place, and why don't we create a new process directly? I am new to Unix; please explain in layman's terms.
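    To make sure I understand the pattern being described, here is a minimal C sketch of fork-then-exec as I understand it:

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main(void) {
            pid_t pid = fork();              /* duplicate the current process */
            if (pid == -1) {
                perror("fork");
                return EXIT_FAILURE;
            }
            if (pid == 0) {
                /* child: replace the copied image with a new program */
                execlp("ls", "ls", "-l", (char *)NULL);
                perror("execlp");            /* only reached if exec fails */
                _exit(EXIT_FAILURE);
            }
            wait(NULL);                      /* parent: wait for the child to finish */
            return EXIT_SUCCESS;
        }

    From what I have read, one reason for the two-step dance is that between fork and exec the child can adjust its own environment - redirect file descriptors, drop privileges, and so on - before the new program starts, but I would appreciate a fuller explanation.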

    Read the article

  • Deleted some files from home folder

    - by narendra shah
    I have recently installed Ubuntu for the first time in my life, so I am fairly new to it. I have deleted some files from my home folder, but now the problems have started: the system volume automatically reduces to zero, and as soon as I restarted my system, my panel settings were restored to defaults. When I right click in my home folder, it gives an option to 'restore missing files', but I am not able to restore them. Please guide me on how to restore them. Thanks, Narendra

    Read the article

  • Microsoft’s new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year. Microsoft literally buys computers by the truckload. From what I understand, it’s a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number). Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative. I find it to be a great marketing idea with actual substance behind their real work. From the programmer academic perspective, in college we dreamed about this type of processing power. This has decades of computer science theory behind it. A copy of the email I received follows (note that I almost deleted this email, thinking it was spam due to its length):
    We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves.
    The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible, where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model - which would typically take a team of advanced software programmers months to build and days to run - will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980s, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group.
    Enabling more people to make better predictions
    We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm a.i. solutions, Inc. needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights.
    To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections.
    Our Technical Computing direction
    Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas:
    Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed - reliably, consistently and quickly.
    Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud.
    Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology.
    Thinking bigger
    There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible:
    - Better predictions to help improve the understanding of pandemics, contagion and global health trends.
    - Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates.
    - More accurate prediction of natural disasters and their impact, to develop more effective emergency response plans.
    With an ambitious charter in hand, this new team is ready to build on our progress to date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • Set up iis7.5 to deny connections outside of LAN for certain folder [migrated]

    - by Darkcat Studios
    I'm setting up a combined website and extranet currently; they both read from the same database on the same server that the site is hosted on, the reason being that the website is fed from the data that the staff plug into the extranet interface. It also links in to AD for authorising access to the extranet. I have the extranet in a folder within the website folder. What I want to do is only allow the extranet to be accessed from computers within our LAN, but allow the main website to be freely accessible to internet users. I have it set up as a generic web server currently, so anyone can view anything (well, up to the point where the user is asked to log into the extranet, of course!). I have read a lot on this, but nothing I read applies to, or works in, IIS 7.5.
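    One direction I have been considering (not sure if it is the right one): the "IP and Domain Restrictions" role service, with a web.config dropped into the extranet folder so only the LAN subnet is allowed - the addresses below are just placeholders for our range:

        <!-- web.config inside the extranet folder; requires the
             IP and Domain Restrictions role service to be installed -->
        <configuration>
          <system.webServer>
            <security>
              <!-- deny everyone except the LAN subnet -->
              <ipSecurity allowUnlisted="false">
                <add ipAddress="192.168.0.0" subnetMask="255.255.255.0" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </configuration>

    I have also read that the ipSecurity section is locked at the server level by default, so presumably it has to be unlocked in applicationHost.config before a folder-level web.config can set it.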

    Read the article

  • Cleaning up a folder structure from Visual Studio artifacts from the shell

    A few years ago I wrote a post that showed how to write a NAnt script to clean a folder structure of the artifact folders used by Visual Studio. Today what I want to show you is a way that doesn't require NAnt installed on your computer, but that uses just a very simple command for the Windows shell. Actually, it's just a very tiny variation of the same command that Jon Galloway wrote to clean a folder structure of SVN files. But without further ado, here it is: Windows Registry Editor...

    Read the article

  • Organizing Connections with Folders in Oracle SQL Developer

    - by thatjeffsmith
    How many Oracle databases do you work with on a regular basis? I’m guessing the answer for most of you lies between 1 and 500. This post is really geared for those of you who deal with more than just a handful (5) of database connections. Filters are nice when you need to work with a subset of table data, or even a list of tables. So why wouldn’t they be just as useful for organizing your connections? Here’s my complete list of databases: The folders aren’t there by default; you add them as you need them. Now this isn’t an overly large connection list. But when I need to fire up an impromptu demo for a customer, it’s very nice to be able to drill down into JUST those ‘safe’ environments. This actually saves me a few seconds every time I need to connect to one of my databases. So while it’s a very simple feature, it’s one of those things that I recommend EVERYONE take advantage of, as it will save them hours of time over the long haul. Easier to find means I get to work a few seconds faster. This also helps keep me from making mistakes in ‘production’ environments!
    How to Add a Connection Folder
    Select a connection you want to organize. Mouse-right-click, and choose ‘Add to folder.’ You can throw it into a new container or an existing one. Lather, rinse, and repeat as necessary. The only trick is remembering to right-click! Special thanks to @dresendi for today’s topic! He asked how to do this and I realized I hadn’t blogged the topic yet.

    Read the article

  • VMWare Fusion Folder Sharing Not Working with Server

    - by Dave Long
    I have an Ubuntu Server running in VMware Fusion 3.1.2 on my MacBook Pro for Java development, and all my projects sit on my Mac in ~/Workspace/ColdFusion. I had ColdFusion/ shared with my VM through the VMware Tools, and it was working perfectly up until Friday, when the folder sharing just stopped. No updates on either Mac or Linux besides an iTunes update. I tried uninstalling the VMware Tools and reinstalling them, but I get an error at the end of the install. It appears that when I reinstall the tools there are files left over from the old installation. Is there a way to force the uninstall script to completely uninstall and remove all files for the VMware Tools? I know the shared folder used to mount at /mnt/hgfs/ColdFusion.
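    Update: I am considering running the bundled uninstaller before reinstalling, assuming the paths below (the distrib directory is wherever the Tools tarball gets extracted):

        # inside the guest: remove the existing Tools installation
        sudo /usr/bin/vmware-uninstall-tools.pl

        # then re-run the installer from the extracted VMware Tools archive
        sudo ./vmware-tools-distrib/vmware-install.pl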

    Read the article

  • Unable to sync files not located in the Ubuntu One folder

    - by pst007x
    I have the option to sync files in Nautilus; however, when the option is selected, nothing happens. I am able to (slowly) sync files in the Ubuntu One folder (unless they are in sub-folders), but only there. Why give the option if it does not work? This really is still beta, so to charge for this service is terrible. I am a little frustrated at the moment; I have tried all sorts of things, from removing accounts to a fresh Ubuntu install! I am at a loss :-( Can anyone else sync files outside the Ubuntu One folder, and if so, how? Thanks

    Read the article

  • Smart Grid Gateway and New Meter Data Management released

    - by Anthony Shorten
    Two products have just been released and are available from edelivery.oracle.com:
    - Smart Grid Gateway 2.0.0 - a new product to integrate with Smart Grid networks
    - Meter Data Management 2.0.1 - a new version of the Meter Data Management product
    These products are the first products to use the brand new version of the Oracle Utilities Application Framework (V4.1). The new framework builds upon FW2.2 and FW4.0.2 to add exciting new features (this is just a subset):
    - Support for Database Vault
    - Enhancements to Business Object Maintenance
    - Batch Statistics Portal for benchmarking
    - Custom template user exit support
    - File permissions now consistent with other Oracle products
    - Use of Universal Connection Pool for all database pool access
    - Ability to manage the batch data cache
    Over the next few weeks I will be publishing articles and updates to existing whitepapers to highlight all the new features.

    Read the article

  • How to Export Email "Sent" Folder?

    - by user249493
    A client had her web site and email hosted at "company A". She was switching to "company B" but didn't want to lose her email. I set up a Gmail account, POPed into her webmail account, and pulled the entire inbox into Gmail (for later transfer to her new host). But I forgot about the "sent" folder. Although the hosting plan is still up and running, she changed her domain record to point to the new host. So I can't access the old webmail account via POP or IMAP because the email address needed for authentication now resolves to the new host. Is there any way I can get the contents of the sent folder without having to do a "forward" one message at a time (there are hundreds)?

    Read the article

  • Launch Windows Explorer From Current Command Prompt Folder

    - by Gopinath
    While using the command prompt, did you ever feel like accessing the files of the current folder using Windows Explorer? Here is a simple command that launches Windows Explorer and opens up the current folder's content:
    explorer .
    This tip is very handy for all the command prompt lovers to quickly return to Windows Explorer and perform some mouse-based operations. via how to geek

    Read the article

  • How to encrypt php folder under /var/www?

    - by sirchaos
    I need to encrypt the folder /var/www/test, which contains PHP files. The goals are: to prevent any user from reading the PHP content; to keep /var/www/test encrypted if the HD is mounted on another computer; and to keep the data in /var/www/test accessible when the computer boots up without any user logged in. What is the correct approach for this? I've tried "ecryptfs-setup-private" as advised in How to encrypt /var/www? yet it didn't work for me; I might have missed something. I tested the folders by booting with an Ubuntu 12.04 installation disk and mounting the drive, and I was able to access the /var/www/test content - yet this is what I want to prevent. gnome-encfs isn't the way to go, since its decryption happens when a user logs on to the system, and I would like the system to be working after a power failure etc. without anyone logged in. Please advise.

    Read the article
