Search Results

Search found 58440 results on 2338 pages for 'data cleansing'.


  • Code execution times out occasionally

    - by Athul k Surendran
    I am working on an e-commerce website. In one case I need to fetch all the data in the database through a third-party API and send it to an indexing engine. This API has many functions, such as getproducts and getproductprice, each of which returns its data in XML format. From there I take over: I make the various API calls, transform the XML with XSLT, and write the result to a CSV file, which is then uploaded to the indexing engine. Right now I have details of 8,000 products to feed the engine; the process almost always takes about 15 minutes to complete, and sometimes fails. I can't find a better solution for this. I am thinking about handling the XML in C# itself and getting rid of XSLT, since I believe XSLT is far slower than C#. Is that a good idea, or what else can I do to solve this issue?
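    Whichever language replaces the stylesheet, the usual shape of the fix is to stream-parse each API response and append CSV rows incrementally instead of materializing everything first. A rough Python sketch of that shape (the file and element names here are invented for illustration):

        import csv
        import xml.etree.ElementTree as ET

        # Hypothetical saved response from a getproducts-style API call.
        with open("feed.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["id", "name", "price"])
            # iterparse streams the document instead of loading all products at once
            for _, elem in ET.iterparse("getproducts.xml", events=("end",)):
                if elem.tag == "product":
                    writer.writerow([elem.findtext("id"),
                                     elem.findtext("name"),
                                     elem.findtext("price")])
                    elem.clear()  # release the element so memory stays flat

    Streaming keeps memory constant regardless of catalog size, which also makes intermittent failures easier to isolate to a single record.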

    Read the article

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp file. When I render the model, it attaches itself to the current vertex buffer object. This is because my whole graphical user interface is drawn as 2D quads, except for the window frame. Is there a way to avoid this, or are there any common causes of it?

    Creating the file object:

        int index = IndexAssigner(1, 1);
        // make a file object and store the list and the index of that list in a C string
        ifstream file(list[index].c_str());
        // make another string
        // string line;
        points.push_back(Point());
        Point p;
        int face[4];

    Model rendering code:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);
        cout << "Size Of Point" << sizeof(Point) << endl;
        GLuint vertexbuffer;
        glGenVertexArrays(1, &vao[3]);
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
        glEnableClientState(GL_INDEX_ARRAY);
        glIndexPointer(GL_FLOAT, faces.size(), faces.data());
        glEnableVertexAttribArray(0);
        glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
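    Not from the original post, but one common remedy is to give the model its own vertex array object so that drawing it never inherits whatever buffers the GUI quads left bound. A rough sketch of that idea in Python with PyOpenGL (the function names are mine, and a GL 3.x context is assumed to exist):

        import ctypes
        import numpy as np
        from OpenGL.GL import *

        def make_model_vao(points, indices):
            # One VAO per object captures its buffer bindings and attribute layout.
            vao = glGenVertexArrays(1)
            vbo, ebo = glGenBuffers(2)
            glBindVertexArray(vao)
            glBindBuffer(GL_ARRAY_BUFFER, vbo)
            glBufferData(GL_ARRAY_BUFFER, points.nbytes, points, GL_STATIC_DRAW)
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo)
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.nbytes, indices, GL_STATIC_DRAW)
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, ctypes.c_void_p(0))
            glEnableVertexAttribArray(0)
            glBindVertexArray(0)  # unbind so later GUI code cannot disturb this state
            return vao

        def draw_model(vao, index_count):
            glBindVertexArray(vao)  # rebind the model's own state before each draw
            glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, None)
            glBindVertexArray(0)

    Rebinding the model's VAO at the start of its draw call restores its buffers regardless of what the 2D interface bound in between.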

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sold dental plans online. Our primary focus for disaster prevention and recovery was our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring. Data redundancy was a key aspect of the plan. The first phase was to replicate our data to multiple database servers and schedule daily backups of the databases, stored off site. The next phase was file replication of data among our web servers, which were also backed up daily by our colocation provider. In addition to the files located on the servers, files were stored locally on development machines and backed up again using version control software. Load balancing was another key aspect of the plan, offering better performance, load distribution, and increased availability. With our servers behind a load balancer, the system could accept multiple requests simultaneously because the load was split between multiple servers; if one server was slow or failing, traffic was diverted among the other servers behind the load balancer while that server recovered. The final key was off-site monitoring that notified all IT staff, by email, voicemail, and SMS, of any outages or errors the monitor encountered on the main website. According to Disasterrecovery.org, disaster recovery planning is how companies manage crises with minimal cost and effort and maximum speed, compared with companies forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing, and utilities, as well as indirect costs such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, along with the strategies for recovery.

    Read the article

  • ArchBeat Link-o-Rama for October 23, 2013

    - by OTN ArchBeat
    Virtual Dev Day: Oracle ADF Development - Web, Mobile, and Beyond
    This free virtual event includes technical sessions ranging from introductory to deep dive, covering Oracle ADF and Oracle ADF Mobile. Multiple tracks cover every interest and every level, and include live online Q&A for answers to your technical questions. Register now! Americas: Tuesday, November 19, 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT. APAC: Thursday, November 21, 10am-1:30pm IST (India) / 12:30pm-4pm SGT (Singapore) / 3:30pm-7pm AESDT. EMEA: Tuesday, November 26, 9am-1pm GMT / 1pm-5pm GST / 2:30pm-6:30pm IST.

    A Roadmap for SOA Development and Delivery | Mark Nelson
    Do you know the way to S-O-A? Mark Nelson does. His latest blog post, part of an ongoing series, will help keep you from getting lost along the way.

    Updated ODI Statement of Direction | Robert Schweighardt
    Heads up, Oracle Data Integrator fans! A new product statement of direction document is available, offering "an overview of the strategic product plans for Oracle's data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB)."

    Java-Powered Robot Named NAO Wows Crowds | Tori Wieldt
    Java community manager Tori Wieldt interviews a robot and a human.

    Nordic OTN Tour 2013 | Lonneke Dikmans
    Oracle ACE Director Lonneke Dikmans checks in from the Stockholm leg of the 2013 Nordic OTN Tour, sponsored by the Danish Oracle User Group and featuring fellow ACE Directors Tim Hall and Sten Vesterli, plus local speakers at various stops. Lonneke's post includes the slides from three of the presentations.

    Thought for the Day
    "Some people approach every problem with an open mouth." — Adlai E. Stevenson, 23rd Vice President of the United States (October 23, 1835 – June 14, 1914) Source: brainyquote.com

    Read the article

  • Clouds Around the World

    - by user12608550
    At the NIST Cloud Computing Workshop this week, representatives from Canada, China, and Japan presented on their cloud computing efforts. Some interesting points made: Canada is building a "Service Canada" cloud for all citizen services, but raised the issue of data location: cloud data must stay within Canada's borders, so they will not focus on public clouds where they don't know, or can't control, data location. Japan, in response to the massive destruction of the Great East Japan Earthquake, is building nationwide cloud services to support disaster relief, data recovery, and the rebuilding of new communities. US Ambassador Philip Verveer discussed the need for international cooperation and standards development to enable interoperability of cloud services, keeping in mind cultural and political differences. Additionally, an industry panel reported on cloud standards development, including some actual interoperability testing at http://www.cloudplugfest.org. Much of the first two days of the workshop covered progress and action plans around the 10 High-Priority Requirements to Further USG Agency Cloud Computing Adoption. Thursday's sessions will cover the work of the various NIST Cloud Computing Working Groups on Reference Architecture and Taxonomy, Standards Acceleration to Jumpstart the Adoption of Cloud Computing (SAJACC), the Cloud Security Standards Roadmap, and Business Use Cases (see the Working Groups of NIST Cloud Computing).

    Read the article

  • Example: Cross Cutting Concerns of an Application

    A little while ago I was given an opportunity to design and implement a new system that sent data via an HTTP POST and then processed the returned results so that they could be inserted into a database. My system had eight core concerns to fulfill:

    Eight core concerns:
    1. Database access
    2. Data entities
    3. Worker
    4. Result processing
    5. Process flow manager
    6. Email/notification
    7. Error handling
    8. Logging

    Of these eight, five were actually cross-cutting:

    Five cross-cutting concerns:
    1. Database access
    2. Data entities
    3. Email/notification
    4. Error handling
    5. Logging

    These five cross-cutting concerns were identified after I created an aspect-oriented model to pick out the system components that could be factored out into separate components. These separated components were then included in the system so that they could be used by the various other components: they allow all of the others to access the database, store data, send notifications, handle errors, and log all system events. Thus, these components share distinct aspects of the system through their implementation. The use of aspect-oriented architecture greatly helped me define which components I needed to create and what each of those components should do. It also showed how all of the other aspects depended on each other, so that each component did not have to re-implement code that already existed in the system.
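    As an illustration only (this sketch is not from the original article), a cross-cutting concern such as logging combined with error handling can be factored into a single reusable Python component, here a decorator, that any other component applies without re-implementing the code:

        import functools
        import logging

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("worker")

        def logged(func):
            """Factored-out logging and error-handling concern, reusable anywhere."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                log.info("entering %s", func.__name__)
                try:
                    return func(*args, **kwargs)
                except Exception:
                    log.exception("error in %s", func.__name__)  # error-handling concern
                    raise
            return wrapper

        @logged
        def process_results(payload):
            # stand-in for the result-processing core concern
            return payload.upper()

        print(process_results("http response body"))

    Every core concern that needs logging simply wears the decorator; the logging code itself lives in exactly one place.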

    Read the article

  • 12.10 install overwrote my Windows partition

    - by Niall C
    Recently I decided to switch back to Ubuntu. I have a 3TB drive that was running Windows 7 with three partitions: C: for Windows, and D: and E: for data. I have installed Ubuntu before, so I 'thought' I knew what I was doing. Using UNetbootin, I installed from a USB stick. I didn't choose the default options, but I didn't choose the manual install either. I can't remember which option I took, but I figured at some stage it would tell me how it was going to partition the disk, and at that point I would see whether it had recognised the NTFS partitions and could abort if it hadn't. Unfortunately it didn't ask, and just went ahead and installed Ubuntu, making up its own mind about how to partition the disk. Usual story: the two NTFS data partitions weren't backed up. Is there anything I can do to retrieve the NTFS data? I'm currently trying out TestDisk, and I know I can use PhotoRec to retrieve certain file types, but all the filenames will be lost and it's going to take a hell of a lot of time to rename them all. Any help or assistance would be more than gratefully accepted. Thanks in advance, Niall

    Read the article

  • How do I write to an outer TrueCrypt volume when the inner volume protection prevents writing?

    - by con-f-use
    In a nutshell: after some time using the outer volume of a TrueCrypt hidden volume, I can no longer write to the outer volume; the protection of the inner volume always kicks in first. How do I fix this?

    Details: I'm using TrueCrypt's two-layered encryption on a USB stick. The outer container carries my semi-sensitive material, while the hidden inner volume holds somewhat more valuable information. I use both the inner and outer volumes regularly, and that is part of the problem. TrueCrypt can mount the outer volume for writing while protecting the inner one. Usually the inner volume, when not protected this way (or when mounted read-only), is indistinguishable from free space; that is, of course, part of TrueCrypt's plausible-deniability scheme. At the beginning, everything worked as expected: I could copy data to and delete data from the outer volume as I pleased. Now it seems I have written and deleted enough data to have filled the outer volume once. Despite the write protection, Ubuntu now tries to write to the contiguous "free space" that is actually the inner volume. It does this even though there is enough other free space on the outer volume; but that space used to hold data, so it is fragmented, and the filesystem prefers contiguous space. The write to the contiguous free space then fails (with an error message) as TrueCrypt's inner-volume protection kicks in.

    The question: I know this is expected behaviour, but is there a better way to write to the outer volume that does not attempt to touch the hidden "free space" at the end? More generally: how do I control where on a partition data is written in Ubuntu?

    Read the article

  • MVC pattern synchronisation

    - by Hariprasad
    I am facing a problem synchronizing my model and view threads. My view is a table in which the user can select rows. I update the view as soon as the user clicks on a row, since I don't want the UI to feel slow; this update is done by logic that runs in the controller thread. At the same time, the controller updates the model data, which happens in a different thread: the controller puts the query in a queue, which is then executed by the model thread, a single-threaded interface. As soon as the query executes, the controller gets a signal. To keep the view and model synchronized, I then update the view again based on the return value of the query (the data returned by the model), even though I already updated the view for that user action. The problem is that the model takes a long time to return the result, by which time the user may have clicked several more times. As a result of updating the view again with the information from the model, the view sometimes jumps back to the state produced by an earlier click. (Suppose the user clicks three times on different rows. I update the view as soon as each click happens, and I also update the view when data comes back from the model, which should match the already-updated state of the view. When the user clicks the third time, the data for the first click arrives from the model, and the view snaps back to the state generated by the first click.) Is there any way to handle such a synchronization issue?
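    The post doesn't include code, but one common remedy for this race is to stamp every user action with an increasing token and have the controller discard any model response that does not carry the newest token. A minimal Python sketch of that idea (the view class and the way the token travels with the queued query are stand-ins):

        import itertools

        class View:
            def show(self, data):
                print("view now displays:", data)

        class Controller:
            def __init__(self, view):
                self.view = view
                self._tokens = itertools.count()  # one increasing token per click
                self._latest = -1

            def on_row_clicked(self, row):
                token = next(self._tokens)
                self._latest = token
                self.view.show(row)               # optimistic update, UI stays fast
                return token                      # send along with the queued query

            def on_model_result(self, data, token):
                if token == self._latest:         # stale replies are simply dropped
                    self.view.show(data)

        view = View()
        c = Controller(view)
        t1 = c.on_row_clicked("row 1")
        t2 = c.on_row_clicked("row 2")
        c.on_model_result("data for row 1", t1)   # ignored: a newer click exists
        c.on_model_result("data for row 2", t2)   # applied

    Because stale responses are ignored rather than applied, the view can never snap back to an earlier click, no matter how slowly the model answers.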

    Read the article

  • Turn a Kindle into a Weather Display Station

    - by Jason Fitzpatrick
    The e-ink display, network connectivity, and low power consumption of Kindle ebook readers make them a perfect candidate for an infrequently refreshed, high-visibility display, like a weather station. Read on to see how to hack a Kindle to serve up the local weather. Tinkerer and hardware hacker Matt Petroff hacked his Kindle to accept input from a web server and then, graciously and in the spirit of geeky projects everywhere, shared his source code. He explains the heart of the project: "The server side of the system uses shell and Python scripts to convert weather forecast data into an image for the Kindle. The scripts first download and parse forecast data from NOAA via the National Digital Forecast Database XML/SOAP Service. After parsing, the data needs to be converted into an image. This is accomplished by preprocessing a specially crafted SVG file to insert temperatures, forecast symbols, and days of the week. The SVG is then rendered as a PNG using rsvg-convert and converted to a grayscale, no-transparency color space, as required by the Kindle, using pngcrush. Finally, it is copied to a public location on the web server." The Kindle is set to refresh twice a day (you could easily tweak the scripts for a more frequent refresh) and displays the forecast with crisp, easy-to-read text and icons. Hit up the link below for more information and the project's source code.
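    As a rough illustration of the pipeline described above (the endpoint URL, XML tags, and SVG placeholder names below are invented, not Petroff's actual ones):

        import subprocess
        import urllib.request
        import xml.etree.ElementTree as ET

        # Hypothetical NDFD-style endpoint; the real query string is more involved.
        FORECAST_URL = "https://graphical.weather.gov/xml/SOAP_server/ndfdXML.htm"
        xml_data = urllib.request.urlopen(FORECAST_URL).read()
        root = ET.fromstring(xml_data)
        temps = [t.text for t in root.iter("value")][:4]  # crude parse, illustration only

        # Fill placeholder tokens in a prepared SVG template.
        svg = open("template.svg").read()
        for i, t in enumerate(temps):
            svg = svg.replace(f"$TEMP{i}$", t)
        open("weather.svg", "w").write(svg)

        # Render to PNG, then force grayscale for the Kindle's e-ink display.
        subprocess.run(["rsvg-convert", "-o", "weather.png", "weather.svg"], check=True)
        subprocess.run(["pngcrush", "-c", "0", "weather.png", "weather-kindle.png"], check=True)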

    Read the article

  • TechEd 2012: Fast SQL Server

    - by Tim Murphy
    While I spend a certain amount of my time creating databases (coding around SQL Server, and setting up a server when I have to), it isn't my bread and butter. Since I have run into a number of times that SQL Server needed to be tuned, I figured I would step out of my comfort zone and see what I could learn. Brent Ozar packed a mountain of information into his session on making SQL Server faster. I'm not sure how he found time to hit all of his points, since he was letting the audience abuse him on Twitter instead of asking questions, but he managed it. I also questioned his sanity, since he appeared to be using a fruit laptop. He had my attention, though, when he stated that he had given up on telling people not to use "select *". He posited that it could be fixed with hardware by caching the data in memory, but cautioned that having too many indexes could defeat this approach. His logic was sound, if not always practical, and it is a good place to start when determining the trade-offs you need to balance. He was moving pretty fast, but I believe he was prescribing this solution predominantly for OLTP databases before moving on to data warehouse solutions. Much of the advice he gave for data warehouses is contained in the Microsoft Fast Track guidance, so I won't rehash it here. To summarize, the solution seems to be the proper balance of memory, disk access speed, and the speed of the pipes that get the data from storage to the CPU. It appears to be sound guidance, and the session gave enough information that going forward we should be able to find the details we need easily. Just what the doctor ordered.

    Read the article

  • Take a snapshot with JavaFX!

    - by user12610255
    JavaFX 2.2 has a "snapshot" feature that enables you to take a picture of any node or scene. Take a look at the API documentation and you will find new snapshot methods in the javafx.scene.Scene class. The most basic version has the following signature:

        public WritableImage snapshot(WritableImage image)

    The WritableImage class (also introduced in JavaFX 2.2) lives in the javafx.scene.image package and represents a custom graphical image constructed from pixels supplied by the application. In fact, there are five new classes in javafx.scene.image:

    PixelFormat: Defines the layout of data for a pixel of a given format.
    WritablePixelFormat: Represents a pixel format that can store full colors and so can be used as a destination format to write pixel data from an arbitrary image.
    PixelReader: Defines methods for retrieving the pixel data from an Image or other surface containing pixels.
    PixelWriter: Defines methods for writing the pixel data of a WritableImage or other surface containing writable pixels.
    WritableImage: Represents a custom graphical image that is constructed from pixels supplied by the application, and possibly from PixelReader objects from any number of sources, including images read from a file or URL.

    The API documentation contains lots of information, so go investigate and have fun with these useful new classes! -- Scott Hommel

    Read the article

  • Software solution from the 2000s: should I attempt to patch it or remake the whole thing?

    - by ShadowScripter
    I was sent out to discuss a system that a certain company is currently using and what should be done with it. The company manufactures various carton displays. The system was developed to keep track of clients, orders, and prices. A lot has happened since the system was created, and it is now, as the manager described it, "locked up" and "problematic", which I translate as "not dynamic" and "unstable".

    Some info about the system:
    It was developed around the year 2000.
    It is a fairly small system: 2-5 users, 6 forms, ~8 tables with average quantities of data.
    It was built on early Visual Basic, with forms created via drag-and-drop design.
    The interface is basically just a window with a menu and some forms.
    It uses an MSSQL database (SQL Server 2005) to store data and an ODBC driver to query it. The data was migrated from Excel before this system existed, and before Excel it was handled, calculated, and written by hand on paper.
    Users work in a Microsoft XP environment (and up).

    Their main problem is that they can no longer adjust and calculate prices, add new carton types, and so on, because they can't (or rather, they don't know how to) touch the data on the server.

    I suggested three possible solutions:
    1. Attempt to patch the current system.
    2. Create a fresh new interface (preferably a similar environment, VB.NET or VB-based).
    3. Bring it back to an Excel solution, considering it is such a small system.

    There might be more options, but these are the ones I could think of. My questions are: What should I recommend, and why? What are (or could be) the pros and cons of these alternatives? Are there other (possibly better) alternatives?

    Read the article

  • Project Showcase: SaaS Web Apps Hits a Home Run with New SCMS Database

    - by Webgui
    We love seeing projects from start to finish, and we're happy to share the latest example with you.

    Who: SaaS Web Apps. They use software as a service to create web applications that look and feel like desktop applications.

    What: SaaS Web Apps needed to build a Sports Contract Management System (SCMS) for one of its customers, Premier Stinson Sports.

    Why: The SCMS database is used for collecting, analyzing, and recording college coach and athletic director employment and contract data.

    The Challenge: Premier Stinson Sports works with a number of partners, each with its own needs and unique requirements. For example, USA Today uses the system to provide cutting-edge news analysis, while The National Sports Law Institute of Marquette University Law School uses it for the latest sports contract data and student analysis. In addition, the system needed to be secure due to the sensitivity of the data; it was essential that user security and permissions be easily configurable. As always, performance was a key factor, especially with the intense reporting and analytical capabilities of this project. Because of this, most of the processing had to be done on a dedicated server, but the project called for the richness and responsiveness of a desktop application.

    The Solution: To execute the project, SaaS Web Apps used the ASP.NET-based Visual WebGui from Gizmox, combined with SQL Server 2008 and SQL Reporting Services. This combination resulted in a quick deployment for SaaS Web Apps' customers.

    The Result: The completed project gave each partner the scalability and availability of a web application with the performance and security of a desktop application. As an example, USA Today pulls data from this database to give readers the latest sports stats, such as a salary analysis of 2010 Football Bowl Subdivision coaches. Great work, SaaS Web Apps!

    Read the article

  • CUDA 4.1 Update

    - by N0xus
    I'm currently working on porting a particle system to update on the GPU via CUDA. I've already passed the required data over to the GPU, allocating and copying it from the host. When I build the project it all compiles fine, but when I run it, the program says I need to allocate my h_position pointer. This pointer is my host pointer and is meant to hold the data. I know I need to pass the current particle positions to the cudaMemcpy call; they are currently stored in a list, with a for loop iterating over each particle and executing the following line of code:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    My current host-side CUDA code looks like this:

        float* h_position;  // Host pointer. This holds the data (I assumed it was already filled).
        float* d_position;  // Device pointer; we will allocate and fill this.
        float* d_velocity;
        float* d_time;

        int threads_per_block = 128;  // You should play with this value
        int blocks = m_maxParticles/threads_per_block + ((m_maxParticles%threads_per_block)?1:0);

        const int N = 10;
        size_t size = N * sizeof(float);

        cudaMalloc((void**)&d_position, m_maxParticles * sizeof(float));
        cudaMemcpy(d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);

    Both of these can be found inside my UpdateParticle() method. I had originally thought it would be a simple case of changing the h_position variable in the cudaMemcpy to m_particleList[i], but then I get the following error:

        no suitable conversion function from "ParticleSystemClass::ParticleType" to "const void *" exists

    I've probably messed up somewhere, but could someone please help me fix the issues I'm facing? Everything else seems to run fine; it's just when I try to run the program that certain things hit the fan.
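    The original code is C++, but the underlying fix, packing the array-of-structs particle list into one contiguous float buffer before the copy, can be sketched in Python with NumPy and PyCUDA (assumed available; the field names mirror the particle struct above):

        import numpy as np
        import pycuda.autoinit            # creates a CUDA context on import
        import pycuda.driver as drv

        # Hypothetical stand-in for m_particleList: an array of structured records.
        particles = np.zeros(8192, dtype=[("x", np.float32), ("y", np.float32),
                                          ("z", np.float32), ("v", np.float32)])

        # Pack the Y positions into one contiguous float buffer; this is the
        # host-side equivalent of filling h_position before cudaMemcpy.
        h_position = np.ascontiguousarray(particles["y"])

        d_position = drv.mem_alloc(h_position.nbytes)  # cudaMalloc
        drv.memcpy_htod(d_position, h_position)        # cudaMemcpyHostToDevice

    The key point is that the copy source must be a flat buffer of floats, not a struct element, which is exactly what the "no suitable conversion" error is complaining about.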

    Read the article

  • How to populate a private container for unit tests?

    - by Sardathrion
    I have a class that defines a private container (well, __container to be exact, since this is Python). I use the information within the container as part of the logic of what the class does, and I can add and delete its elements. For unit tests, I need to populate this container with some data. That data depends on the test being run, so putting it all in setUp() would be impractical and bloated, and it could add unwanted side effects. Since the data is private, I can only add things via the public interface of the object. That runs code that need not run during a unit test, and in some cases it is just a copy and paste from another test. Currently I am mocking the whole container, but somehow that does not feel like an elegant solution. Because of Python's mocking framework (mock), this requires the container to be public so that I can use patch.dict(). I would rather keep the data private. What pattern can one use to populate the container without exercising the public methods, so that I have data to test with? Is there a way to do this with mock's patch.dict() that I missed?
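    One pattern (a suggestion that fits the question, not something from the original post) is to reach through Python's name mangling in the test itself, seeding the private container directly while leaving the production code untouched. The class and attribute names below are made up:

        import unittest

        class Catalog:
            def __init__(self):
                self.__container = {}  # name-mangled to _Catalog__container

            def lookup(self, key):
                return self.__container.get(key)

        class CatalogTest(unittest.TestCase):
            def test_lookup_uses_container(self):
                catalog = Catalog()
                # Seed the private container via name mangling, bypassing the
                # public interface for test setup only.
                catalog._Catalog__container["answer"] = 42
                self.assertEqual(catalog.lookup("answer"), 42)

        if __name__ == "__main__":
            unittest.main()

    Each test can seed exactly the data it needs, which avoids both the bloated setUp() and the side effects of going through the public API.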

    Read the article

  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit, we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized. Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. So it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even a per-index basis, wouldn't it?

    This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

        +-----------------+--------------+------+
        | Field           | Type         | Null |
        +-----------------+--------------+------+
        | database_name   | varchar(192) | NO   |
        | table_name      | varchar(192) | NO   |
        | index_name      | varchar(192) | NO   |
        | compress_ops    | int(11)      | NO   |
        | compress_ops_ok | int(11)      | NO   |
        | compress_time   | int(11)      | NO   |
        | uncompress_ops  | int(11)      | NO   |
        | uncompress_time | int(11)      | NO   |
        +-----------------+--------------+------+

    This is similar to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

        SELECT database_name, table_name, index_name,
               compress_ops - compress_ops_ok AS failures
        FROM information_schema.innodb_cmp_per_index
        ORDER BY failures DESC;

    would reveal the most problematic tables and indexes, those with the highest compression failure rate. From there, the way to improve performance would be to try increasing the compressed page size, or changing the structure of the table/indexes or the data being stored, and see whether it has a positive impact on performance.
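    For instance, a small monitoring script could poll that failure rate periodically. This sketch assumes mysql-connector-python is installed; the connection parameters are placeholders:

        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="monitor",
                                       password="changeme",
                                       database="information_schema")
        cur = conn.cursor()
        cur.execute("""
            SELECT database_name, table_name, index_name,
                   compress_ops - compress_ops_ok AS failures,
                   compress_ops
            FROM INNODB_CMP_PER_INDEX
            ORDER BY failures DESC
            LIMIT 10
        """)
        for db, tbl, idx, failures, ops in cur:
            rate = failures / ops if ops else 0.0  # failure rate per index
            print(f"{db}.{tbl}.{idx}: {failures} failures ({rate:.1%})")
        cur.close()
        conn.close()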

    Read the article

  • Drive reporting incorrect free space

    - by Oli
    So I swapped my shiny SATA SSD for an even shinier PCI-E SSD. I run my core OS on the SSD because it's silly-fast. To migrate from the old SSD I created a new ext4 partition and then just dd'ed the data across (sorry, I no longer know the exact command I ran), and after reinstalling GRUB I booted onto the PCI-E SSD. At first glance everything had worked perfectly and things were running faster than ever. But then I noticed the free disk space on the new, larger drive: it was almost exactly the same as on the old disk, a disk half its size. So it looks as if I copied the files across incorrectly and brought some of the filesystem metadata along with them. Tools like du and Disk Usage Analyzer come back with the correct figures, but things that look at the partition (and not the files) seem to think the drive is 120GB. I've been using this drive for a week now, so it's way out of sync with the old SSD, and dumping the data and starting again isn't a job that fills me with joy. Two questions: 1. Is there a way to fix my filesystem so it knows what it's really sitting on? fsck, e2fsck, and badblocks all seem able to scan it without finding a problem. 2. If I do plug my old SSD back in, copy the data from the PCI-E drive onto it, and then copy it back onto a fresh filesystem (i.e., juggle the data around), what's the best way of doing that? I obviously want to keep all the permissions and symlinks where they are.

    Read the article

  • GLSL custom interpolation filter

    - by Cyan
    I'm currently building a fragment shader that uses several textures to render the final pixel color. The textures are not really textures; they are in fact "input data" to be used in the formula that generates the final color. The problem I've got is that the textures are bilinearly filtered, and therefore so is the input data. This causes many unwanted side effects, especially when the final rendered texture is "zoomed" relative to the original resolution. Removing the side effects is a complex task and only results in "average" rendering. I was thinking: well, all my problems seem to come from the default bilinear filtering on the input data. I can't move to GL_NEAREST either, since that would create "blocky" rendering. So I guess the better way to proceed is to take full charge of the interpolation myself. For this to work, I would need the input data at its "natural" resolution (which means four samples) and the relative position between the sampled points. Is that possible, and if so, how?

    [EDIT] Since I started this question, I found this page, which seems to (mostly) answer my needs: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/ One aspect of the solution worries me, though: the dimensions of the texture must be provided as an argument; there seems to be no way to "find this information transparently". Adding an argument to the rendering pipeline is unwelcome, since it's not under my responsibility and translates into added complexity for others.
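    For reference, the manual filter the linked article describes amounts to the following arithmetic, shown here as a NumPy sketch operating on a 2D grid (a shader version would do the same with four GL_NEAREST fetches and the fractional texel offset):

        import numpy as np

        def bilinear_sample(grid, x, y):
            """Interpolate grid (H x W) at fractional coordinates (x, y) by hand."""
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1 = min(x0 + 1, grid.shape[1] - 1)  # clamp to the grid edge
            y1 = min(y0 + 1, grid.shape[0] - 1)
            fx, fy = x - x0, y - y0              # fractional position between samples
            top    = grid[y0, x0] * (1 - fx) + grid[y0, x1] * fx
            bottom = grid[y1, x0] * (1 - fx) + grid[y1, x1] * fx
            return top * (1 - fy) + bottom * fy

        data = np.array([[0.0, 1.0],
                         [2.0, 3.0]])
        print(bilinear_sample(data, 0.5, 0.5))   # 1.5, the average of the four texels

    Owning this arithmetic is what lets you swap in any other reconstruction filter once the four raw samples and the fractional offsets are in hand.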

    Read the article

  • Database Insider - December 2012 issue

    - by Javier Puerta
    The December issue of the Database Insider newsletter is now available. (Full newsletter here)

    Big Data: From Acquisition to Analysis
    2012 will likely be remembered as the year of big data, as a new generation of technologies enables organizations to acquire, organize, and analyze the exponentially growing and typically less-structured data generated from a variety of new sources. Oracle has produced a series of five short videos that offer a quick and compelling high-level introduction to big data. Read more.

    Total Cost of Ownership Comparison: Oracle Exadata vs. IBM P-Series
    Read the research that found that, over three years, the IBM hardware running Oracle Database cost 31 percent more in total cost of ownership than Oracle Exadata.

    Webcast: Oracle Exadata Database Machine X3
    Learn about Oracle's next-generation database machine, Oracle Exadata X3, which combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Available in an eighth-rack configuration, it allows you to start small and grow.

    Maximum Availability with Oracle GoldenGate
    Discover how to eliminate not only unplanned downtime but also planned downtime resulting from database upgrades, migrations, and consolidation. Thursday, December 13, 19:00 CET / 6 p.m. UK

    Read the article

  • Creating dynamic plots

    - by geoff92
    I'm completely new to web programming, but I do have a programming background. I'd like to create a site that lets users visit, enter some specifications into a form, submit the form, and then receive a graph. I have a few questions about this, only because I'm pretty ignorant:

    1. Is there a good framework I should start with? I know a lot of Java, I'm okay with Python, and I learned Ruby in the past. I figure I might use Ruby on Rails, only because I hear of it so often and I think I've also heard it's easy. If anyone has another recommendation, please suggest it.

    2. The user will be entering data into a form. I'm guessing the request they'll be making should be a GET request, right? Because I don't intend for any of the data they enter to modify my server (in fact, I don't intend to have a database).

    3. The data the user inputs will be used to perform calculations involving lots of matrices. I've written this functionality in Python. If I use Ruby on Rails, should it instead be written in Ruby?

    4. Somewhere I've heard that you can place the load of the work either on your server or on the client's computer. Since the code performs heavy math, which option is preferable, and how do I alter the setup to make either the client or my server do the work? Should I be using a "cgi-bin"?

    5. In the code I have now, I use matplotlib, a Python library, and then "show" the plot in order to see the graph. I specify the x and y limits, but I am able to "drag" the graph to see more data within the plot window. Ultimately, I want a graph with that drag functionality shown on my site. Is this possible? And what if the client drags the graph so far that more computations must be made?

    Thanks!
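    As one possible starting point (Flask and a GET-driven endpoint are assumptions here, not something decided in the question), a form can drive a server-rendered matplotlib graph like this:

        import io

        from flask import Flask, request, send_file
        import matplotlib
        matplotlib.use("Agg")  # headless rendering on the server, no display needed
        import matplotlib.pyplot as plt
        import numpy as np

        app = Flask(__name__)

        @app.route("/plot")
        def plot():
            # Form values arrive as GET query parameters, e.g. /plot?freq=3
            freq = float(request.args.get("freq", 1.0))
            x = np.linspace(0, 2 * np.pi, 500)
            fig, ax = plt.subplots()
            ax.plot(x, np.sin(freq * x))  # stand-in for the matrix-heavy computation
            buf = io.BytesIO()
            fig.savefig(buf, format="png")
            plt.close(fig)
            buf.seek(0)
            return send_file(buf, mimetype="image/png")

        if __name__ == "__main__":
            app.run(debug=True)

    Doing the math server-side keeps the existing Python code as-is; interactive dragging would then be a separate, client-side layer (for example a JavaScript plotting library) that requests new data when the view moves.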

    Read the article

  • Customer Highlight: NTT DOCOMO

    - by jeckels
    NTT DOCOMO is the largest mobile operator in Japan and serves over 13 million smartphone customers. Due to their growing data processing and scalability needs, they turned to Oracle's Cloud Application Foundation products for an integral solution. At Oracle OpenWorld 2012, we first showcased NTT DOCOMO as a customer utilizing Oracle Coherence to process mobile data at a rate of 700,000 events per second (and then using Hadoop for distributed processing of big data). Overall, this led to a 50% cost reduction due to the ultra-high-velocity processing of their customers' events. Recently, on October 7th, 2013, Oracle and NTT DOCOMO were proud to again announce a partnership around another key component of Oracle CAF: WebLogic Server. WebLogic was recently deployed as the application platform of choice to run DOCOMO's mission-critical data system ALADIN, which connects nationwide shops and information centers. ALADIN, which also utilizes Oracle Database and Oracle Tuxedo, is based on Java Platform, Enterprise Edition (Java EE), which has allowed the company to operate smoothly while minimizing the additional development and modification associated with the migration of application server products. We look forward to continuing to partner with NTT DOCOMO, and we are proud that Oracle Cloud Application Foundation products are providing the mission-critical solutions, at scale, that DOCOMO requires. Want to learn more about how CAF products are working in the real world? Join us for a FREE Virtual Developer Day on November 5th from 9am-1pm Pacific Time! REGISTER NOW

    Read the article

  • How to catch an HTTP 404 in Flash

    - by Quandary
    When I execute the (2nd) code below with a wrong URL (the digit '1' appended to the end), I get the error below. How can I catch this error when the URL is wrong, so that I can show an error message to the user?

        Error opening URL 'http://localhost/myapp/cgi-bin/savePlanScale.ashx1?NoCache%5FRaumplaner=F7CF6A1E%2D7700%2D8E33%2D4B18%2D004114DEB39F&ScaleString=5%2E3&ModifyFile=data%2Fdata%5Fzimmere03e1e83%2D94aa%2D488b%2D9323%2Dd4c2e8195571%2Exml'
        httpStatusHandler: [HTTPStatusEvent type="httpStatus" bubbles=false cancelable=false eventPhase=2 status=404]
        status: 404
        Error: Error #2101: The String passed to URLVariables.decode() must be a URL-encoded query string containing name/value pairs.

        public static function NotifyASPXofNewScale(nScale:Number)
        {
            var strURL:String = "http://localhost/myapp/cgi-bin/savePlanScale.ashx1"; // CAUTION: when called from the website, RELATIVE url...
            var scriptRequest:URLRequest = new URLRequest(strURL);
            var scriptLoader:URLLoader = new URLLoader();
            // loader.dataFormat = URLLoaderDataFormat.TEXT;   // default, returns a string
            scriptLoader.dataFormat = URLLoaderDataFormat.VARIABLES; // returns URL variables
            // loader.dataFormat = URLLoaderDataFormat.BINARY; // to load images, XML files, and SWFs instead
            var scriptVars:URLVariables = new URLVariables();

            scriptLoader.addEventListener(Event.COMPLETE, onLoadSuccessful);
            scriptLoader.addEventListener(IOErrorEvent.IO_ERROR, onLoadError);
            scriptLoader.addEventListener(HTTPStatusEvent.HTTP_STATUS, httpStatusHandler);
            scriptLoader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, onSecurityError);

            scriptVars.NoCache_Raumplaner = cGUID.create();
            scriptVars.ScaleString = nScale;
            scriptVars.ModifyFile = "data/data_zimmere03e1e83-94aa-488b-9323-d4c2e8195571.xml";

            scriptRequest.method = URLRequestMethod.GET;
            scriptRequest.data = scriptVars;
            scriptLoader.load(scriptRequest);

            function httpStatusHandler(event:HTTPStatusEvent):void
            {
                trace("httpStatusHandler: " + event);
                trace("status: " + event.status);
            }

            function onLoadSuccessful(evt:Event):void
            {
                trace("cSaveData.NotifyASPXofNewScale.onLoadSuccessful");
                trace("Response: " + evt.target.data);
                ExternalInterface.call("alert", "The new scale was saved successfully.");

                if (evt.target.data.responseStatus == "YOUR FAULT")
                {
                    trace("Error: Flash transmitted an illegal scale value.");
                    ExternalInterface.call("alert", "Error: Flash could not save the new scale.");
                }

                if (evt.target.data.responseStatus == "EXCEPTION")
                {
                    trace("Exception in ASP.NET: " + evt.target.data.strError);
                    ExternalInterface.call("alert", "Exception in ASP.NET: " + evt.target.data.strError);
                }
            }

            function onLoadError(evt:IOErrorEvent):void
            {
                trace("cSaveData.NotifyASPXofNewScale.onLoadError");
                trace("Error: ASPX or transmission error. ASPX responseStatus: " + evt);
                ExternalInterface.call("alert", "ASPX or transmission error. ASPX responseStatus: " + evt);
            }

            function onSecurityError(evt:SecurityErrorEvent):void
            {
                trace("cSaveData.NotifyASPXofNewScale.onSecurityError");
                trace("Security error: " + evt);
                ExternalInterface.call("alert", "Security error. Description: " + evt);
            }
        }

    Read the article

  • A little bit of Ajax goes a long way

    - by Holland
    ...except when you're having problems. My problem is this: I have a hierarchical list of categories stored in a database, which I wish to output in a dropdown list. The hierarchy comes into play when the subcategories are displayed; they depend on a parent id (which corresponds to the first seven or so main categories listed). My thinking is relatively simple: when the user clicks the dynamically populated list of main categories, they are clicking on an option tag. Each option tag carries an id (i.e., the parent) in its value attribute, as well as an argument that is sent to a JavaScript function, which then uses AJAX to fetch the data via PHP in my 'javascript.php' file. The file then does its magic and populates the subcategory list, which depends on the main category selected. I believe I have the idea down; it's just that I'm implementing the solution incorrectly for some reason. Here's what I have so far.

    From javascript.php:

        <script type="text/javascript" src=<?=JPATH_BASE.DS.'includes'.DS.'jquery.js';?>>
        var ajax = {
            ajax.sendAjaxData = function(method, url, dataTitle, ajaxData) {
                $.ajax({
                    type: 'post',
                    url: "C:/wamp/www/patention/components/com_adsmanagar/views/edit/tmpl/javascript.php",
                    data: { 'data' : ajaxData },
                    success: function(result){
                        // we have the response
                        alert("Your request was successful." + result);
                    },
                    error: function(e){
                        alert('Error: ' + e);
                    }
                });
            }

            ajax.printSubCategoriesOnClick = function(parent) {
                alert("hello world!");
                ajax.sendAjaxData('post', 'javascript.php', 'data' parent);
                <?php $categories = $this->formData->getCategories($_POST['data']); ?>
                ajax.printSubCategories(<?=$categories;?>);
            }

            ajax.printSubCategories = function(categories) {
                var select = document.getElementById("subcategories");
                for (var i = 0; i < categories.length; i++) {
                    var opt = document.createElement("option");
                    opt.text = categories['name'];
                    opt.value = categories['id'];
                }
            }
        }
        </script>

    The function used to populate the form data:

        function populateCategories($parent, FormData $formData)
        {
            $categories = $formData->getCategories($parent);
            echo "<pre>";
            print_r($categories);
            echo "</pre>";
            foreach($categories as $section => $row){ ?>
                <option value=<?=$row['id'];?> onclick="ajax.printSubCategoriesOnClick(<?=$row['id']?>)">
                    <? echo $row['name']; ?>
                </option>
            <?php }
        }

    The problem is that when I try to do a print_r on my $_POST variable, nothing shows up, and I receive an "undefined index" error (which refers to the key I set for the data). I'm thinking that for some reason the data is being sent to my form.php file, or to my default.php file, which includes both the form.php and javascript.php files via a function. Is there something specific that I'm missing here? Just looking up basic AJAX syntax (via jQuery) hasn't helped, unfortunately.

    Read the article

  • MySQL Query GROUP_CONCAT Over Multiple Rows

    - by PeteGO
    I'm getting name and address data out of generic question/answer data to create a kind of normalised reporting database. The query I've got uses GROUP_CONCAT and works for individual sets of questions, but not for multiple sets. I've tried to simplify what I'm doing by using just forename and surname and just three records: two for one person and one for another. In reality, though, there are more than 300,000 records.

    Example of results with qs.Id = 1:

        QuestionSetId   Forename   Surname
        -----------------------------------
        1               Bob        Jones

    Example of results with qs.Id IN (1, 2, 3):

        QuestionSetId   Forename        Surname
        -------------------------------------------------
        3               Bob,Bob,Frank   Jones,Jones,Smith

    What I would like to see for qs.Id IN (1, 2, 3):

        QuestionSetId   Forename   Surname
        -----------------------------------
        1               Bob        Jones
        2               Bob        Jones
        3               Frank      Smith

    So how can I make the second example return a separate row for each set of name and address information? I realise the current way the data is stored is "questionable", but I cannot change the way the data is stored. I can get sets of individual answers, but I'm not sure how to combine the others.

    My simplified schema, which I cannot change:

        CREATE TABLE StaticQuestion (
            Id INT NOT NULL,
            StaticText VARCHAR(500) NOT NULL);

        CREATE TABLE Question (
            Id INT NOT NULL,
            Text VARCHAR(500) NOT NULL);

        CREATE TABLE StaticQuestionQuestionLink (
            Id INT NOT NULL,
            StaticQuestionId INT NOT NULL,
            QuestionId INT NOT NULL,
            DateEffective DATETIME NOT NULL);

        CREATE TABLE Answer (
            Id INT NOT NULL,
            Text VARCHAR(500) NOT NULL);

        CREATE TABLE QuestionSet (
            Id INT NOT NULL,
            DateEffective DATETIME NOT NULL);

        CREATE TABLE QuestionAnswerLink (
            Id INT NOT NULL,
            QuestionSetId INT NOT NULL,
            QuestionId INT NOT NULL,
            AnswerId INT NOT NULL,
            StaticQuestionId INT NOT NULL);

    Some example data for only forename and surname:

        INSERT INTO StaticQuestion (Id, StaticText) VALUES (1, 'FirstName'), (2, 'LastName');
        INSERT INTO Question (Id, Text) VALUES (1, 'What is your first name?'), (2, 'What is your forename?'), (3, 'What is your Surname?');
        INSERT INTO StaticQuestionQuestionLink (Id, StaticQuestionId, QuestionId, DateEffective) VALUES (1, 1, 1, '2001-01-01'), (2, 1, 2, '2008-08-08'), (3, 2, 3, '2001-01-01');
        INSERT INTO Answer (Id, Text) VALUES (1, 'Bob'), (2, 'Jones'), (3, 'Bob'), (4, 'Jones'), (5, 'Frank'), (6, 'Smith');
        INSERT INTO QuestionSet (Id, DateEffective) VALUES (1, '2002-03-25'), (2, '2009-05-05'), (3, '2009-08-06');
        INSERT INTO QuestionAnswerLink (Id, QuestionSetId, QuestionId, AnswerId, StaticQuestionId) VALUES (1, 1, 1, 1, 1), (2, 1, 3, 2, 2), (3, 2, 2, 3, 1), (4, 2, 3, 4, 2), (5, 3, 2, 5, 1), (6, 3, 3, 6, 2);

    Just in case SQLFiddle is down, here are the three queries from the examples I've linked to:

    1. Working query, but only on one set of data:

        SELECT MAX(QuestionSetId) AS QuestionSetId,
               GROUP_CONCAT(Forename) AS Forename,
               GROUP_CONCAT(Surname) AS Surname
        FROM (SELECT x.QuestionSetId,
                     CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename,
                     CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname
              FROM (SELECT (SELECT link.StaticQuestionId
                            FROM StaticQuestionQuestionLink link
                            WHERE link.Id = qa.QuestionId
                              AND link.DateEffective <= qs.DateEffective
                              AND link.StaticQuestionId IN (1, 2)
                            ORDER BY link.DateEffective DESC
                            LIMIT 1) AS StaticQuestionId,
                           a.Text,
                           qa.QuestionSetId
                    FROM QuestionSet qs
                    INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId
                    INNER JOIN Answer a ON qa.AnswerId = a.Id
                    WHERE qs.Id IN (1)) x) y

    2. Working query, but with undesired results on multiple sets of data:

        SELECT MAX(QuestionSetId) AS QuestionSetId,
               GROUP_CONCAT(Forename) AS Forename,
               GROUP_CONCAT(Surname) AS Surname
        FROM (SELECT x.QuestionSetId,
                     CASE x.StaticQuestionId WHEN 1 THEN Text END AS Forename,
                     CASE x.StaticQuestionId WHEN 2 THEN Text END AS Surname
              FROM (SELECT (SELECT link.StaticQuestionId
                            FROM StaticQuestionQuestionLink link
                            WHERE link.Id = qa.QuestionId
                              AND link.DateEffective <= qs.DateEffective
                              AND link.StaticQuestionId IN (1, 2)
                            ORDER BY link.DateEffective DESC
                            LIMIT 1) AS StaticQuestionId,
                           a.Text,
                           qa.QuestionSetId
                    FROM QuestionSet qs
                    INNER JOIN QuestionAnswerLink qa ON qs.Id = qa.QuestionSetId
                    INNER JOIN Answer a ON qa.AnswerId = a.Id
                    WHERE qs.Id IN (1, 2, 3)) x) y

    3. Working query on multiple sets of data, but only on one field (answer):

        SELECT qs.Id AS QuestionSet, a.Text AS Answer
        FROM QuestionSet qs
        INNER JOIN QuestionAnswerLink qalink ON qs.Id = qalink.QuestionSetId
        INNER JOIN StaticQuestionQuestionLink sqqlink ON qalink.QuestionId = sqqlink.QuestionId
        INNER JOIN Answer a ON qalink.AnswerId = a.Id
        WHERE sqqlink.StaticQuestionId = 1 /* FirstName */
          AND sqqlink.DateEffective = (SELECT DateEffective
                                       FROM StaticQuestionQuestionLink
                                       WHERE StaticQuestionId = 1
                                         AND DateEffective <= qs.DateEffective
                                       ORDER BY DateEffective DESC
                                       LIMIT 1)

    Read the article
