Search Results

Search found 23901 results on 957 pages for 'mysql stored procedure'.


  • Combining database and CSV file loading in Java Swing in 5 minutes, by Thierry Leriche-Dessirier

    Hello everyone, I'd like to share a short article, entitled "Load and display data from the database and from a simple CSV file in 5 minutes", available at http://thierry-leriche-dessirier.dev...-db-csv-5-min/ Synopsis: This short article shows (by example) how to load data from a simple CSV file (with Open-CSV) and from a MySQL database (with JDBC), merging the values to display them in a Swing interface as a table (JTable and TableModel) and as charts, all in just a few minutes. You can also find the other articles in the seri...
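
    The article itself uses Java (Open-CSV for the file, JDBC for MySQL, Swing for the display). Purely as an illustration of the merge step it describes, here is a minimal Python sketch; the file name, table name, column names and connection details are all invented for the example:

        import csv
        import mysql.connector  # assumes the mysql-connector-python package

        # Load the CSV rows into a dict keyed by id (hypothetical layout: id,score)
        with open("scores.csv", newline="") as f:
            csv_rows = {row["id"]: row for row in csv.DictReader(f)}

        # Load the database rows (hypothetical table "persons" with columns id, name)
        conn = mysql.connector.connect(host="localhost", user="demo",
                                       password="demo", database="demo")
        cur = conn.cursor(dictionary=True)
        cur.execute("SELECT id, name FROM persons")

        # Merge the two sources on id, ready to be shown in a table or chart
        merged = []
        for db_row in cur:
            extra = csv_rows.get(str(db_row["id"]), {})
            merged.append({**db_row, **extra})

        conn.close()
        print(merged)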

    Read the article

  • Partitioning tutorial - new features in Oracle Database 12c

    - by KLaker
    For data warehousing projects, Oracle Partitioning really is a must-have feature because it delivers so many important benefits: it dramatically improves query performance and speeds up database maintenance operations; it lowers costs by enabling a tiered storage approach that allows data to be stored on the most cost-effective storage for better resource utilisation; and, combined with Oracle Advanced Compression, it provides an automated approach to information lifecycle management using a simple, efficient, yet powerful way to manage data growth and reduce complexity and costs. To help you get the most from partitioning we have released a new tutorial that covers the 12c new features. Topics include how to: use Interval Reference Partitioning; perform cascading TRUNCATE and EXCHANGE operations; move partitions online; maintain multiple partitions; maintain global indexes asynchronously; and use partial indexes. For more information about this tutorial follow this link to the Oracle Learning Library: http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:8408,2 where you can begin your tutorial right now! For more information about Oracle Partitioning visit our home page on OTN: http://www.oracle.com/technetwork/database/bi-datawarehousing/dbbi-tech-info-part-100980.html

    Read the article

  • How to Estimate Needed Bandwidth for New Web Application?

    - by Noah Goodrich
    I am working on a brand new SaaS web application and need to estimate the initial bandwidth usage. Since the site doesn't exist yet, and since this is my first endeavor of this sort, I'm not really sure how much bandwidth to estimate to begin with. We will be using Linux, Apache, PHP and MySQL. The content will be generated dynamically. There will be images as part of the site design, but users will also upload images that will be displayed and documents that will be stored for download at later times. We'd like to be able to support 500,000 page loads per month, with estimated image loads being about two to three times that.
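
    A rough back-of-the-envelope calculation is usually the best you can do before launch. The sketch below shows the arithmetic in Python; every size is an assumed placeholder to be replaced with real measurements once the site is live:

        # Assumed averages - replace with real measurements after launch
        page_loads_per_month = 500_000
        avg_page_kb = 150          # HTML + CSS + JS per page load (assumption)
        image_loads_per_month = 3 * page_loads_per_month
        avg_image_kb = 80          # per image served (assumption)

        total_kb = (page_loads_per_month * avg_page_kb
                    + image_loads_per_month * avg_image_kb)
        total_gb = total_kb / 1024 / 1024
        avg_mbps = total_kb * 8 / (30 * 24 * 3600) / 1024  # average, not peak

        print(f"~{total_gb:.0f} GB/month, ~{avg_mbps:.2f} Mbit/s average")

    With these placeholder numbers that comes to roughly 190 GB per month and well under 1 Mbit/s on average; the figure to watch is the peak rate, which can easily be ten times the average.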

    Read the article

  • Speed up lighting in deferred shading

    - by kochol
    I implemented a simple deferred shading renderer. I use three G-Buffers to store position (R32F), normal (G16R16F) and albedo (ARGB8). I use the sphere map algorithm to store normals in world space. Currently I use the inverse of the view * projection matrix to calculate the position of each pixel from the stored depth value. First, I want to avoid the per-pixel matrix multiplication for calculating the position: is there another way to store and calculate the position in the G-Buffer without the need for a matrix multiplication? Second, I want to store the normal in view space: all lighting in my engine is in world space, and I want to do the lighting in view space to speed up my lighting pass. I want an optimized lighting pass for my deferred engine.
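
    One common way to drop the per-pixel matrix multiply (a sketch of the general frustum-corner-ray technique, not something specific to this engine) is to store linear view-space depth in the G-Buffer and reconstruct the view-space position from a ray to the far plane, interpolated from the frustum corners across the full-screen quad. In the shader this is a single multiply per pixel; the arithmetic, written out in Python for clarity:

        def view_space_position(r_far, z_view, z_far):
            """Reconstruct view-space position from linear depth.

            r_far  - interpolated eye-to-far-plane-corner vector for this pixel
            z_view - linear view-space depth read from the G-Buffer
            z_far  - distance to the far plane
            """
            scale = z_view / z_far
            return (r_far[0] * scale, r_far[1] * scale, r_far[2] * scale)

        print(view_space_position((0.5, -0.25, 100.0), 20.0, 100.0))

    A side benefit is that the result is already in view space, which lines up with the wish to do the lighting pass in view space.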

    Read the article

  • Why does my CD Tray keep popping open?

    - by Anton
    I have an Ubuntu server - Ubuntu 10.04.3 LTS - and I can't figure out why my CD tray is opening all the time. I have looked into /var/log/auth.log and the cron list and found nothing. The eject command closes the tray, but then it gets opened again. The server has a LAMP (Linux-Apache-MySQL-PHP) setup and I can't afford to restart it now. How can I find out who or which program is popping the tray open? Which programs can cause this behavior?

    Read the article

  • Confused about career options in Web Development

    - by Radheshyam Nayak
    I am currently in the final year of my computer science degree. I love programming in PHP, but not under pressure. As my time as a student is coming to an end, I have to shape my career. My personal desire is to become a web developer and start my own web-based company after completing my course. I have no desire to work for a company as a developer. Currently I have programming knowledge of PHP, MySQL and JavaScript, though I have not completed any kind of project in PHP. So, to become a complete web developer, what else do I need to know to be able to get development projects? Any project I apply for is simply declined due to my lack of a portfolio. So how should I proceed?

    Read the article

  • A plan to study ASP.NET + C# + SQL + SQL Server [closed]

    - by ali saleem
    Possible Duplicates: Should I be a professional in C# programming in order to build good web applications using ASP.NET?; Is there a combination of language and database that is both great to use and free/cheap?; C# for web development? or C# as general purpose programming?; ASP.NET MVC book for absolute beginners. Will it cost me a lot if I choose ASP.NET and IIS? Is it possible to use MySQL in ASP.NET? Best books to start with ASP.NET MVC / C# and Visual Studio? Is it enough for me to learn the above technologies to become a professional web developer? If so, then how can I learn them - together, or starting with C# first, for example? If there is another thing I should learn, please tell me about it.

    Read the article

  • Multiplayer Game Listen Servers: Ensuring Integrity

    - by Ankit Soni
    I'm making a simple multiplayer game of Tic Tac Toe in Python using Bridge (it's an RPC service built over a message queue - RabbitMQ), and I'd like to structure it so that the client and the server are just one file. When a user runs the game, he is offered a choice to either create a game or join an existing game. So when a user creates a game, the program will create the game and also join him as a player to the game. This is basically a listen server (as opposed to a dedicated server) - a familiar concept in multiplayer games. I came across a really interesting question while trying to make this - how can I ensure that the player hosting the game doesn't tamper with it (or at least make it difficult)? The player hosting the game has access to the array used to store the board etc., and these must be stored in the process' virtual memory, so it seems like this is impossible. On the other hand, many multiplayer games use this model for LAN games.

    Read the article

  • Google I/O 2010 - Data migration in App Engine

    Google I/O 2010 - Data migration in App Engine (App Engine 201), presented by Matthew Blain. Learn about the App Engine bulk loader and see an example of migrating data from an external data source into the App Engine datastore--and back out. Do you have data stored in a traditional, relational DB which you'd like to upload to App Engine? This session will teach you how. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 44:26.

    Read the article

  • Developing a live video-streaming website

    - by cawecoy
    I'm a computer science student and know a little about some of the technologies I could use to start developing my website, like PHP, Ruby on Rails and Python, with MySQL and PostgreSQL for the database. I need to know which are the best (secure, stable, low-priced, etc.) to get started with, based on my business information: My website will be a live video-streaming one, similar to livestream.com. We need to provide a secure service for our customers. They need to have a page to create and configure their own live streaming videos, get statistics, etc. We work with Wowza Media Server running on an Apache server. In addition, I would like to know some good practices for this kind of website development, as I am new to this. Thanks in advance!

    Read the article

  • How to make audio and video streaming servers work?

    - by explorex
    I am a PHP/MySQL developer ... just a (below) average one, and I am interested in the way television and radio are broadcast live over the internet. I want to know how it works and what its requirements are (which package of which programming language offers the best support). I must admit that I am a complete layman, but I expect to get it done within the next half month or year or so. And please clarify this for me: websites are stored on servers. From my desktop, I want to broadcast some video, so I need to connect to the web server (to upstream the video). Is there an application to do that (or do I have to code it or embed it in my web application, and which programming language would be suitable - does Python support that)? And I also need a script to handle the upstreamed video or audio (can I do that with PHP)?

    Read the article

  • Server 11.04 Bind9 Failure

    - by Gordon McIntosh
    After three weeks of trying to diagnose a bind9 failure, I have to admit I need assistance. Initially it was running correctly, but when I made a modification to a bind file and re-ran it, an error message came back: rndc: 127.0.0.1#953 bind9 start fail. It also fails to start after any other bind9 upgrade or required package. grep on mysql now returns nothing even when it is running, and dig on 127 only works when I have my phone connected as a modem. I don't have any more diagnostics that I know of, so some help would be appreciated, as development has stopped.

    Read the article

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:

        Random generator = new Random();
        Func<int> func = () => generator.Next(10);

    In this case, the compiler generates a new class called <>c__DisplayClass1 which is marked with the CompilerGenerated attribute.

        [CompilerGenerated]
        private sealed class <>c__DisplayClass1
        {
            // Fields
            public Random generator;

            // Methods
            public int b__0()
            {
                return this.generator.Next(10);
            }
        }

    Two quick comments on this:

    (i) A display was the means by which compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name.

    (ii) It is a shame that the same attribute is used to mark all compiler-generated classes, as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high-level concepts.

    We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func object simply constructs an instance of this class and initialises the fields to their initial values.

        <>c__DisplayClass1 class2 = new <>c__DisplayClass1();
        class2.generator = new Random();
        Func<int> func = new Func<int>(class2.b__0);

    Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this example is correctly decompiled. The use of compiler-generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.

        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerable/IEnumerator pair of interfaces, with the enumerator's Current property for fetching the current value and its MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerable- or IEnumerator-returning method using the yield keyword causes the compiler to produce the implementation of an iterator. Take the following piece of code.

        IEnumerable<int> GetItems()
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread. This latter value is used for optimising the creation of iterator instances.

        [CompilerGenerated]
        private sealed class <GetItems>d__0 : IEnumerable<int>, IEnumerable, IEnumerator<int>, IEnumerator, IDisposable
        {
            // Fields
            private int <>1__state;
            private int <>2__current;
            public Program <>4__this;
            private int <>l__initialThreadId;

    The body gets converted into the code to construct and initialize this new class.

        private IEnumerable<int> GetItems()
        {
            <GetItems>d__0 d__ = new <GetItems>d__0(-2);
            d__.<>4__this = this;
            return d__;
        }

    When the class is constructed we set the state, which was passed through as -2, and the current thread.

        public <GetItems>d__0(int <>1__state)
        {
            this.<>1__state = <>1__state;
            this.<>l__initialThreadId = Thread.CurrentThread.ManagedThreadId;
        }

    The state needs to be set to 0 to represent a valid enumerator, and this is done in the GetEnumerator method, which optimises for the usual case where the returned enumerator is only used once.

        IEnumerator<int> IEnumerable<int>.GetEnumerator()
        {
            if ((Thread.CurrentThread.ManagedThreadId == this.<>l__initialThreadId)
                  && (this.<>1__state == -2))
            {
                this.<>1__state = 0;
                return this;
            }

    The state machine itself is implemented inside the MoveNext method.

        private bool MoveNext()
        {
            switch (this.<>1__state)
            {
                case 0:
                    this.<>1__state = -1;
                    this.<>2__current = 1;
                    this.<>1__state = 1;
                    return true;
                case 1:
                    this.<>1__state = -1;
                    this.<>2__current = 2;
                    this.<>1__state = 2;
                    return true;
                case 2:
                    this.<>1__state = -1;
                    this.<>2__current = 3;
                    this.<>1__state = 3;
                    return true;
                case 3:
                    this.<>1__state = -1;
                    break;
            }
            return false;
        }

    At each stage, the current value of the state is used to determine how far we got, and then we generate the next value, which we return after recording the next state. Finally, we return false from MoveNext to signify the end of the sequence.

    Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext, and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value.

    Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions, where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount of work we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that, Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler-generated fields.

    In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.

    Read the article

  • Alternatives to NSMutableArray for storing 2D grid - iOS Cocos2d

    - by SundayMonday
    I'm creating a grid-based iOS game using Cocos2d. Currently the grid is stored in an NSMutableArray that contains other NSMutableArrays (the latter are rows in the grid). This works OK and performance so far is pretty good. However, the syntax feels bulky and the indexing isn't very elegant (it uses CGPoints; I would prefer integer indices), so I'm looking for an alternative. What are some alternative data structures for 2D arrays in this situation? In my game it's very common to add and remove rows from the bottom of the grid, so the grid might start off 10x10, grow to 17x10, shrink to 8x10 and then finally end up 2x10. Note that the column count is constant. I've considered using a vector<vector<Object*>>. Also, I'm vaguely aware of some type of "fast array" or similar offered by Cocos2d. I'd just like to learn about best practices from other developers!
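
    Since the column count never changes and rows only come and go at one end, one alternative worth considering is a single flat array indexed as row * cols + col. The sketch below is Python purely to show the indexing scheme (in Cocos2d the same idea works with a plain C array or std::vector); it assumes "the bottom" corresponds to the last rows in the array:

        class Grid:
            """Flat row-major grid with a fixed column count."""

            def __init__(self, rows, cols, fill=None):
                self.cols = cols
                self.cells = [fill] * (rows * cols)

            def get(self, row, col):
                return self.cells[row * self.cols + col]

            def set(self, row, col, value):
                self.cells[row * self.cols + col] = value

            def add_row_bottom(self, fill=None):
                self.cells.extend([fill] * self.cols)   # cheap append at the end

            def remove_row_bottom(self):
                del self.cells[-self.cols:]             # cheap removal at the end

        g = Grid(10, 10)
        g.set(2, 3, "tile")
        g.add_row_bottom()

    Growing and shrinking at the bottom only touches the end of the array, and lookups are plain integer arithmetic with no nested containers.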

    Read the article

  • File system implementation in MongoDB with GridFS

    - by Ralph
    I am working on two projects that will both implement a WebDAV server backed by MongoDB GridFS. In each case, there is the potential for the system to store tens of millions of files spread across thousands of hierarchical directories. I can come up with two different ways of storing the directory structure: As a "true" hierarchical file system, with directories containing the IDs (_id) of subdirectories and regular files. The paths will be separated by slashes (/) as in a POSIX-compliant file system; the path /a/b/c will be represented as a directory a containing a directory b containing a file c. Or as a flat file system, where file names include the slashes; the path /a/b/c will be stored as a single file with the name /a/b/c. What are the advantages and disadvantages of each, with respect to a "real" folder-based file system?
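
    For what it's worth, the flat layout maps very directly onto GridFS, because the full path can simply be the filename and a "directory listing" becomes a prefix query. A minimal Python sketch with pymongo (the database name and paths are made up for the example):

        from pymongo import MongoClient
        import gridfs

        db = MongoClient()["webdav_demo"]
        fs = gridfs.GridFS(db)

        # Store a file under its full slash-separated path
        fs.put(b"hello world", filename="/a/b/c")

        # "List the directory /a/b/" by matching the path prefix
        for f in fs.find({"filename": {"$regex": "^/a/b/"}}):
            print(f.filename, f.length)

    An index on filename would be needed for the prefix query to stay fast at tens of millions of files; the trade-off is that renaming a directory means rewriting the path on every descendant.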

    Read the article

  • Why get dedicated hosting? [closed]

    - by user176105
    Possible Duplicate: How to find web hosting that meets my requirements? I just finished writing a website and I'm about to publish it. I was looking at hosting options and I was about to get regular hosting from GoDaddy, which is about $6 a month with unlimited bandwidth, 150 GB of storage, 500 email accounts and 25 MySQL databases. The other option is dedicated servers, which range a lot in price but are around $200 a month. Why would someone choose a dedicated server? Is it because they hit the limits of regular hosting, or is it because the RAM/CPU is shared on regular hosting? If the latter, what will happen if a lot of users come to my site and max out the RAM/CPU?

    Read the article

  • Is it convenient to use an XML/JSON-generation-based system? [closed]

    - by Marcelo de Assis
    One of my clients insists that we missed a requirement for the project (a little social network, using PHP + MySQL): all queries must be made against static XML/JSON files (loaded via AJAX) and not directly against the database. Edit: The main reason, stated by the client, is to save bandwidth. Even the file loading has to be done via AJAX, to avoid server-side processing. We will still use a database to store and insert data; these XML/JSON files will be (re)generated whenever any data is changed by the CMS or by users. As the project was developed without this "technique", we'd have a lot of work ahead of us, so I would like to avoid it. I'm asking whether it's convenient to use an XML-generation-based system, because I think a database is already "optimized" and secure enough to deal with a lot of data traffic. The other issue I'm afraid of is the chance of a conflict when a user tries to read an XML/JSON file that is just being regenerated.
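
    On the read-during-regeneration worry specifically: the usual trick is to write the new file under a temporary name and then rename it over the old one. A rename on the same filesystem is atomic, so readers see either the old file or the new file, never a half-written one. The project here is PHP, so this Python sketch is only to show the idea; the file path and data are invented:

        import json
        import os
        import tempfile

        def publish_json(path, data):
            """Atomically replace `path` with a freshly generated JSON file."""
            directory = os.path.dirname(path) or "."
            fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
            try:
                with os.fdopen(fd, "w") as tmp:
                    json.dump(data, tmp)
                os.replace(tmp_path, path)   # atomic on the same filesystem
            except BaseException:
                os.unlink(tmp_path)
                raise

        publish_json("latest_posts.json", {"posts": [{"id": 1, "title": "Hi"}]})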

    Read the article

  • Export FoxBase database tables to CSV

    - by RKS
    I have no experience with FoxBase whatsoever, and I'm used to working with MySQL via phpMyAdmin or interfaces like that. My company has a third-party database we're trying to move away from, but we have no support from the vendor. The database is on our servers, but in a FoxBase format. What kind of tools do I need to convert these tables into other formats, or is there any type of admin UI I can tell them to use to export to CSV or anything like that? Basically I'm asking how to export the FoxBase tables to CSV. Sorry if this question isn't clear or you need more information; I will edit with anything else you need.
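
    FoxBase tables are normally plain .dbf files, so one low-effort route (a sketch rather than official tooling: it assumes the third-party dbfread package and an invented file name) is to read each .dbf and dump it to CSV from a small script:

        import csv
        from dbfread import DBF  # third-party package: pip install dbfread

        def dbf_to_csv(dbf_path, csv_path):
            table = DBF(dbf_path)  # add encoding="cp850" etc. if characters look wrong
            with open(csv_path, "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(table.field_names)
                for record in table:
                    writer.writerow(list(record.values()))

        dbf_to_csv("CUSTOMER.DBF", "customer.csv")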

    Read the article

  • What is the relationship between the business logic layer and the data access layer?

    - by Matt Fenwick
    I'm working on an MVC-ish app (I'm not very experienced with MVC, hence the "-ish"). My model and data access layer are hard to test because they're very tightly coupled, so I'm trying to uncouple them. What is the nature of the relationship between them? Should just the model know about the DAL? Should just the DAL know about the model? Or should both the model and the DAL be listeners of the other? In my specific case: it's a web application; the model is client-side (JavaScript); the data is accessed from the back-end using Ajax; and persistence/back-end is currently PHP/MySQL, but may have to switch to Python/GoogleDataStore on the GAE.
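
    A common way to cut the coupling (a generic sketch, not specific to this PHP/JavaScript stack; all names are invented) is dependency inversion: the model depends only on a small interface it owns, and the concrete DAL implements that interface. Only the DAL knows about the transport, and the model can be tested against an in-memory fake:

        from abc import ABC, abstractmethod

        class TaskStore(ABC):
            """Interface the model depends on; the model never sees Ajax/SQL details."""
            @abstractmethod
            def load_tasks(self): ...
            @abstractmethod
            def save_task(self, task): ...

        class AjaxTaskStore(TaskStore):
            """Concrete DAL: would call the PHP/MySQL back end over HTTP."""
            def load_tasks(self):
                raise NotImplementedError("HTTP call goes here")
            def save_task(self, task):
                raise NotImplementedError("HTTP call goes here")

        class InMemoryTaskStore(TaskStore):
            """Test double, which is what makes the model easy to unit test."""
            def __init__(self):
                self.tasks = []
            def load_tasks(self):
                return list(self.tasks)
            def save_task(self, task):
                self.tasks.append(task)

        class TaskModel:
            def __init__(self, store: TaskStore):
                self.store = store
            def add(self, title):
                self.store.save_task({"title": title, "done": False})

        model = TaskModel(InMemoryTaskStore())
        model.add("write tests")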

    Read the article

  • How can I choose between Linux and Windows hosting?

    - by Mohamad
    I am a relative beginner when it comes to choosing web servers and hosting plans. I'm about to sign up for a hosting plan with GoDaddy. My main requirements are ColdFusion and MySQL. The plans on offer include Linux- and Windows-based plans. Which one should I choose, and why? I don't have a lot of requirements other than what I mentioned above. I have never used Linux before, but I doubt I'll ever need to do anything beyond tinkering with my account. What are the main advantages of one over the other?

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports. When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time. Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution. To solve these, we needed to be able to populate the schema of a database without actually connecting to it. The IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay those results to construct the same object model as many times as required, without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.

    They also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, and we can immediately replay it on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshots implementation. Actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of populating a single database. Furthermore, although the code population queries (eg querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly.

    Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team to help us fix bugs in the product much faster than we otherwise would be able to.
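
    The record-and-replay idea is easy to see in miniature. The real tool is C# and streams IDataReader rows through a BinaryWriter, but here is a hedged Python sketch of the same shape (the query text and file name are invented): run the query once against a live connection and persist the raw rows, then later hand the stored rows to the population code without any database at all:

        import pickle

        def record_snapshot(connection, sql, path):
            """Run the query once against the live database and store the raw rows."""
            cur = connection.cursor()
            cur.execute(sql)
            with open(path, "wb") as f:
                pickle.dump({"sql": sql, "rows": cur.fetchall()}, f)

        def replay_snapshot(path):
            """Yield the recorded rows as if they had just come from the server."""
            with open(path, "rb") as f:
                snapshot = pickle.load(f)
            for row in snapshot["rows"]:
                yield row

        # The population code only ever sees an iterable of rows, so it cannot tell
        # whether they came from the live data dictionary or from a file:
        # for owner, table_name in replay_snapshot("all_tables.snapshot"): ...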

    Read the article

  • Pros and cons of the Google font API

    - by Seamus
    I am currently using a font from the Google font directory on my website. I don't fully understand how it works, but it seems like when someone opens my site, their browser is told to go and fetch the font from Google (correct me if I'm wrong). Now, what I'm wondering is: what are the pros and cons of this over just specifying a font family the old-school way? Presumably doing it the Google font directory way has the advantage that visitors will definitely see the font I want them to (as long as the font directory is up). But does this approach have disadvantages? Maybe using fonts that are stored locally speeds up site loading?

    Read the article

  • Text reverses on remote gnome session

    - by Andrew Stern
    I have two computers running 10.04. The first machine is a wired desktop with sshd. The second is a wifi-connected laptop with the ssh client. When I use my laptop to bring up a remote GNOME session to my desktop, all the text gets reversed. Steps: 1) log in as a user on the laptop to activate the wifi with a stored key; 2) go to a console with Ctrl-Alt-F1; 3) run xterm -- :1 to bring up a blank graphical session; 4) run ssh -Y user@desktopmachine gnome-session. This shows reversed text and messes up the keyboard so I can't type.

    Read the article

  • Where to store shaders

    - by Mark Ingram
    I have an OpenGL renderer which has a Scene member variable. The Scene object can contain N SceneObjects. I use these SceneObjects for storing the vertex positions and any transforms. My question is: where should shaders be stored in this arrangement? I guess they need to be in a central location, because multiple objects can use the same shader. But then each object needs access to the shader, because it needs to set attributes on the shader. Does anyone have any advice?
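
    One arrangement that tends to work (a sketch of the idea, not OpenGL-specific code; all names are invented) is a single shader cache owned by the renderer or scene, keyed by name, with each SceneObject holding just the key plus its own per-object uniform values. The shared program is compiled once, and each object sets its attributes right before it draws:

        class ShaderCache:
            """Central owner of compiled shaders; objects share instances by key."""
            def __init__(self):
                self._shaders = {}

            def get(self, name, compile_fn):
                if name not in self._shaders:
                    self._shaders[name] = compile_fn()   # compile once, reuse everywhere
                return self._shaders[name]

        class SceneObject:
            def __init__(self, shader_name, uniforms):
                self.shader_name = shader_name
                self.uniforms = uniforms                 # per-object values

            def draw(self, cache):
                shader = cache.get(self.shader_name, lambda: object())
                # bind `shader`, upload self.uniforms, then issue the draw call
                print("drawing with", self.shader_name, self.uniforms)

        cache = ShaderCache()
        SceneObject("phong", {"color": (1, 0, 0)}).draw(cache)

    This keeps ownership in one place (the cache can also handle reloads and cleanup) while each object still controls the values it feeds to the shared shader.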

    Read the article

  • Server for online browser game

    - by Tim Rogers
    I am going to be making an online single-player browser game. The online element is needed so that a player can log in and store the state of their game. This will include things like what buildings have been made and where they have been positioned, as well as the user's personal statistics and achievements. At this point in time, I am expecting all of the game logic to be performed client-side. So far, I am thinking I will use Flash for creating the client side of the game. I am also creating a MySQL database to store all the user information. My question is how do I connect the two? Presumably I will need some sort of server application which will listen for incoming requests from any clients, perform the SQL query and then return the data. Does anyone have any recommendations of what technology/language to use?
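
    At its simplest, the "server application" can be a small HTTP service that the Flash client calls to load and save the serialized game state, with MySQL behind it. A hedged Python sketch using Flask and mysql-connector-python (the table, column names and credentials are invented; any server-side language with a MySQL driver would do the same job):

        import json
        from flask import Flask, request, jsonify
        import mysql.connector

        app = Flask(__name__)

        def db():
            return mysql.connector.connect(host="localhost", user="game",
                                           password="secret", database="game")

        @app.route("/state/<int:player_id>", methods=["GET"])
        def load_state(player_id):
            conn = db()
            cur = conn.cursor()
            cur.execute("SELECT state_json FROM player_state WHERE player_id = %s",
                        (player_id,))
            row = cur.fetchone()
            conn.close()
            return jsonify(json.loads(row[0]) if row else {})

        @app.route("/state/<int:player_id>", methods=["POST"])
        def save_state(player_id):
            state = request.get_json()
            conn = db()
            cur = conn.cursor()
            cur.execute("REPLACE INTO player_state (player_id, state_json) VALUES (%s, %s)",
                        (player_id, json.dumps(state)))
            conn.commit()
            conn.close()
            return jsonify({"ok": True})

        if __name__ == "__main__":
            app.run()

    Whatever stack you pick, the key design point is the same: the client only ever talks to this service, and the service validates the player's identity before touching the database.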

    Read the article
