Search Results

Search found 60427 results on 2418 pages for 'data transfer'.


  • glTexImage2D behavior on iPhone and other OpenGL ES platforms

    - by spurserh
    Hello, I am doing some work which involves drawing video frames in real time in OpenGL ES. Right now I am using glTexImage2D to transfer the data, in the absence of Pixel Buffer Objects and the like. I suspect that the use of glTexImage2D with one or two frames of look-ahead, that is, using several textures so that the glTexImage2D call can be initiated a frame or two ahead, will allow for sufficient parallelism to play in real time if the system is capable of it at all. Is my assumption true that the driver will handle the actual data transfer to the hardware asynchronously after glTexImage2D returns, assuming I don't try to use the texture or call glFinish/glFlush? Is there a better way to do this with OpenGL ES? Thank you very much, Sean

    Read the article

  • asp.net postback with jquery?

    - by mark smith
    Hi there, can anyone help? I had an ASP.NET button but recently replaced it with a standard HTML button. What I need to do is post back to an ASP.NET page and make sure a particular method is called. The original ASP.NET button had this event handler:

        Protected Sub btnCancelar_Click(ByVal sender As Object, ByVal e As System.EventArgs)
            UtilTMP.DisposeObjects()
            Server.Transfer("~\Forms\test.aspx", True)
        End Sub

    I was using that button with a JavaScript alert and recently changed to a jQuery UI modal dialog, but the dialog doesn't wait for me to answer the question: the postback happens immediately. So I decided to switch to a standard HTML button, but I still need to post back to the ASP.NET page and call a cleanup method like the one below. If I just post back, the cleanup is never called:

        Protected Sub Cleanup()
            UtilTMP.DisposeObjects()
            Server.Transfer("~\Forms\test.aspx", True)
        End Sub

    Read the article

  • ASP.NET MVC ajax chat

    - by nccsbim071
    I built an AJAX chat into one of my MVC websites. Everything works, using polling: at a set interval I use $.post to get new messages from the database. The problem is that messages retrieved with $.post keep getting repeated. Here is my JavaScript code:

        var t;
        function GetMessages() {
            var LastMsgRec = $("#hdnLastMsgRec").val();
            var RoomId = $("#hdnRoomId").val();
            // Get all the messages associated with this roomId
            $.post("/Chat/GetMessages", { roomId: RoomId, lastRecMsg: LastMsgRec },
                function(Data) {
                    if (Data.Messages.length != 0) {
                        $("#messagesCont").append(Data.Messages);
                        if (Data.newUser.length != 0)
                            $("#usersUl").append(Data.newUser);
                        $("#messagesCont").attr({ scrollTop: $("#messagesCont").attr("scrollHeight") - $('#messagesCont').height() });
                        $("#userListCont").attr({ scrollTop: $("#userListCont").attr("scrollHeight") - $('#userListCont').height() });
                    } else {
                    }
                    $("#hdnLastMsgRec").val(Data.LastMsgRec);
                }, "json");
            t = setTimeout("GetMessages()", 3000);
        }

    And here is my controller method that fetches the data:

        public JsonResult GetMessages(int roomId, DateTime lastRecMsg)
        {
            StringBuilder messagesSb = new StringBuilder();
            StringBuilder newUserSb = new StringBuilder();
            List<Message> msgs = (dc.Messages).Where(m => m.RoomID == roomId && m.TimeStamp > lastRecMsg).ToList();
            if (msgs.Count == 0)
            {
                return Json(new { Messages = "", LastMsgRec = System.DateTime.Now.ToString() });
            }
            foreach (Message item in msgs)
            {
                messagesSb.Append(string.Format(messageTemplate, item.User.Username, item.Text));
                if (item.Text == "Just logged in!")
                    newUserSb.Append(string.Format(newUserTemplate, item.User.Username));
            }
            return Json(new {
                Messages = messagesSb.ToString(),
                LastMsgRec = System.DateTime.Now.ToString(),
                newUser = newUserSb.ToString().Length == 0 ? "" : newUserSb.ToString()
            });
        }

    Everything works, except that some messages get repeated. On the first page load I retrieve the data and call the GetMessages() function; the hdnLastMsgRec field is populated on that first load and after that its value is set by the JavaScript. I think the messages keep repeating because of the asynchronous calls, but I'm not sure. Can you help me solve this, or suggest a better way to implement it?
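
    Two details of the code above are worth noting when chasing the repeats: t = setTimeout("GetMessages()", 3000) is scheduled as soon as the request is issued rather than inside the $.post callback, so the next poll can fire with a stale hdnLastMsgRec, and LastMsgRec is derived from DateTime.Now instead of from the rows that were actually returned. A minimal, self-contained C# sketch of the second point, deriving the cursor from the returned data (hypothetical Message class, not the question's LINQ model; an untested sketch, not the question's actual controller):

        // Sketch: derive the "last received" cursor from the newest row actually
        // returned, not from the server clock, so the next poll's lower bound
        // lines up exactly with what the client already has.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Message { public DateTime TimeStamp; public string Text; }

        class PollCursorDemo
        {
            static List<Message> store = new List<Message>();

            // Returns messages newer than lastSeen plus the cursor for the next poll.
            static Tuple<List<Message>, DateTime> GetNewMessages(DateTime lastSeen)
            {
                var fresh = store.Where(m => m.TimeStamp > lastSeen)
                                 .OrderBy(m => m.TimeStamp)
                                 .ToList();
                // Anything inserted between the query and DateTime.Now would be
                // skipped, and clock skew can cause repeats; the newest returned
                // timestamp avoids both.
                var nextCursor = fresh.Count > 0 ? fresh.Last().TimeStamp : lastSeen;
                return Tuple.Create(fresh, nextCursor);
            }

            static void Main()
            {
                store.Add(new Message { TimeStamp = DateTime.UtcNow, Text = "hi" });
                var result = GetNewMessages(DateTime.MinValue);
                Console.WriteLine("{0} new, cursor = {1:o}", result.Item1.Count, result.Item2);
            }
        }

    Applied to the controller above, this would amount to returning msgs.Max(m => m.TimeStamp) as LastMsgRec, assuming the client also waits for each response before scheduling the next poll.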

    Read the article

  • Passing Certificate to Svcutil to generate proxy for OSB Service

    - by webwires
    We want to implement two-way SSL security from WCF to OSB services. We have deployed the certificates successfully: when you browse to the service with IE you get the expected certificate prompt and are then taken straight to the WSDL. But when I attempt to generate a proxy using svcutil, as described in steps 8 and 9 of this MSDN article, http://msdn.microsoft.com/en-us/library/cc949005.aspx, I get the error:

        A reply message was received for operation 'Get' with action
        'http://schemas.xmlsoap.org/ws/2004/09/transfer/Get'. However, your client
        code requires action 'http://schemas.xmlsoap.org/ws/2004/09/transfer/GetResponse'.

    The OSB services are set to use SOAP 1.2, and the svcutil.exe.config we use is identical to the article's except for the findValue and x509FindType: instead we used FindByThumbprint, pointing to the "My" store name and the "CurrentUser" store location. The cert is there, and it is the same cert we select at the IE prompt.

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts:

    1. Read input data (from a file, database, TCP connection, etc.).
    2. Run calculations on the input data, where each calculation is independent of any other calculation.
    3. Write results of the calculations (to a file, database, TCP connection, etc.).

    We can parallelize the program in two dimensions:

    - Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter.
    - Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out.

    This seems like a most basic pattern in concurrent programming, but I am still lost trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel:

    1. Process the input file into raw data (lists/iterables of integers)
    2. Calculate the sums of the data, in parallel
    3. Output the sums

    Below is a traditional, single-process-bound Python program which solves these three tasks:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # basicsums.py
        """A program that reads integer values from a CSV file and writes out their
        sums to another CSV file.
        """

        import csv
        import optparse
        import sys

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            return cli_parser

        def parse_input_csv(csvfile):
            """Parses the input CSV and yields tuples with the index of the row
            as the first element, and the integers of the row as the second
            element. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.reader` instance
            """
            for i, row in enumerate(csvfile):
                row = [int(entry) for entry in row]
                yield i, row

        def sum_rows(rows):
            """Yields a tuple with the index of each input list of integers
            as the first element, and the sum of the list of integers as the
            second element. The index is zero-index based.

            :Parameters:
            - `rows`: an iterable of tuples, with the index of the original row
              as the first element, and a list of integers as the second element
            """
            for i, row in rows:
                yield i, sum(row)

        def write_results(csvfile, results):
            """Writes a series of results to an outfile, where the first column
            is the index of the original row of data, and the second column is
            the result of the calculation. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.writer` instance to which to write results
            - `results`: an iterable of tuples, with the index (zero-based) of
              the original row as the first element, and the calculated result
              from that row as the second element
            """
            for result_row in results:
                csvfile.writerow(result_row)

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)
            # gets an iterable of rows that's not yet evaluated
            input_rows = parse_input_csv(in_csvfile)
            # sends the rows iterable to sum_rows() for results iterable, but
            # still not evaluated
            result_rows = sum_rows(input_rows)
            # finally evaluation takes place as a chain in write_results()
            write_results(out_csvfile, result_rows)
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, which needs to be fleshed out to address the parts in the comments:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # multiproc_sums.py
        """A program that reads integer values from a CSV file and writes out their
        sums to another CSV file, using multiple processes if desired.
        """

        import csv
        import multiprocessing
        import optparse
        import sys

        NUM_PROCS = multiprocessing.cpu_count()

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            cli_parser.add_option('-n', '--numprocs', type='int',
                    default=NUM_PROCS,
                    help="Number of processes to launch [DEFAULT: %default]")
            return cli_parser

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)

            # Parse the input file and add the parsed data to a queue for
            # processing, possibly chunking to decrease communication between
            # processes.

            # Process the parsed data as soon as any (chunks) appear on the
            # queue, using as many processes as allotted by the user
            # (opts.numprocs); place results on a queue for output.
            #
            # Terminate processes when the parser stops putting data in the
            # input queue.

            # Write the results to disk as soon as they appear on the output
            # queue.

            # Ensure all child processes have terminated.

            # Clean up files.
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about it; bonus points for addressing any or all of them:

    - Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read?
    - Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results?
    - Should I use a process pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes too? apply_async()? map_async()? imap()? imap_unordered()?
    - Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
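
    For what it's worth, the wiring the skeleton asks for is an ordinary producer/consumer pipeline: one stage feeding an input queue, a pool of workers draining it onto an output queue, and a writer draining that. Below is a minimal, self-contained sketch of that shape in C# (BlockingCollection standing in for multiprocessing.Queue, CompleteAdding() standing in for the sentinel value the comments mention); only the structure carries over, not the API, and result ordering is deliberately ignored here even though the real program pairs each sum with its row index.

        // Three-stage pipeline sketch: reader -> workers -> writer, joined by two queues.
        using System;
        using System.Collections.Concurrent;
        using System.Linq;
        using System.Threading.Tasks;

        class PipelineSketch
        {
            static void Main()
            {
                var inputQueue = new BlockingCollection<int[]>(64);   // bounded, like a sized Queue
                var outputQueue = new BlockingCollection<int>(64);

                // Stage 1: parse/produce rows.
                var reader = Task.Run(() =>
                {
                    foreach (var row in SampleRows()) inputQueue.Add(row);
                    inputQueue.CompleteAdding();               // "no more input" signal
                });

                // Stage 2: several independent workers summing rows.
                var workers = Enumerable.Range(0, Environment.ProcessorCount)
                    .Select(_ => Task.Run(() =>
                    {
                        foreach (var row in inputQueue.GetConsumingEnumerable())
                            outputQueue.Add(row.Sum());
                    })).ToArray();

                // Close the output queue once every worker has finished.
                Task.WhenAll(workers).ContinueWith(_ => outputQueue.CompleteAdding());

                // Stage 3: write results as they appear.
                foreach (var sum in outputQueue.GetConsumingEnumerable())
                    Console.WriteLine(sum);

                reader.Wait();
            }

            static int[][] SampleRows()
            {
                return new[] { new[] { 1, 2, 3 }, new[] { 4, 5 }, new[] { 6, 7, 8, 9 } };
            }
        }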

    Read the article

  • GZipStream or DeflateStream class?

    - by Seventh Element
    The MSDN documentation tells me the following: "The GZipStream class uses the gzip data format, which includes a cyclic redundancy check value for detecting data corruption. The gzip data format uses the same compression algorithm as the DeflateStream class." It seems GZipStream adds some extra data to the output relative to DeflateStream. I'm wondering: in what type of scenario would it be essential to use GZipStream rather than DeflateStream?
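
    For context, the practical split is framing: GZipStream writes the gzip container (header plus CRC-32 trailer) around the same deflate-compressed data, so its output is what gzip/gunzip, zlib's gzip readers and HTTP "Content-Encoding: gzip" consumers expect, while DeflateStream emits only the raw compressed stream for cases where you handle framing and integrity yourself. A minimal sketch (file names are placeholders):

        // GZipStream matters when the output must be a real .gz artifact that other
        // software can verify and decode; DeflateStream would emit the same compressed
        // bytes without the gzip header/CRC framing.
        using System.IO;
        using System.IO.Compression;

        class GzipExample
        {
            static void Main()
            {
                byte[] payload = File.ReadAllBytes("report.xml");   // placeholder input

                // "report.xml.gz" can be opened by any gzip-aware tool, and the CRC in
                // the trailer lets the reader detect corruption.
                using (var file = File.Create("report.xml.gz"))
                using (var gz = new GZipStream(file, CompressionMode.Compress))
                    gz.Write(payload, 0, payload.Length);

                // DeflateStream output is slightly smaller, but only code that already
                // knows the length/integrity out-of-band can safely consume it.
            }
        }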

    Read the article

  • setResult does not work when the BACK button is pressed

    - by alex2k8
    I am trying to call setResult after the BACK button is pressed. In onDestroy I call:

        Intent data = new Intent();
        setResult(RESULT_OK, data);

    But when it arrives at onActivityResult(int requestCode, int resultCode, Intent data), the resultCode is 0 (RESULT_CANCELED) and data is null. So how can I pass a result from an activity that is terminated by the BACK button?

    Read the article

  • MySQL password hashing method: old vs new

    - by The Disintegrator
    I'm trying to connect to a MySQL server at DreamHost from a PHP script located on a server at Slicehost (two different hosting companies). I need to do this so I can transfer new data from Slicehost to DreamHost. Using a dump is not an option because the table structures are different and I only need to transfer a small subset of data (100-200 daily records). The problem is that I'm using the new MySQL password hashing method at Slicehost, and DreamHost uses the old one, so when I run

        $link = mysql_connect($mysqlHost, $mysqlUser, $mysqlPass, FALSE);

    I get:

        Warning: mysql_connect() [function.mysql-connect]: OK packet 6 bytes shorter than expected
        Warning: mysql_connect() [function.mysql-connect]: mysqlnd cannot connect to MySQL 4.1+ using old authentication
        Warning: mysql_query() [function.mysql-query]: Access denied for user 'nodari'@'localhost' (using password: NO)

    Facts:

    - I need to continue using the new method at Slicehost and I can't use an older PHP version/library.
    - The database is too big to transfer every day with a dump, and even if I did, the tables have different structures.
    - I need to copy only a small subset of it, on a daily basis (only the changes of the day, 100-200 records).
    - Since the tables are so different, I need to use PHP as a bridge to normalize the data.
    - I have already googled it and already talked to both support staffs.

    The more obvious option to me would be to start using the new MySQL password hashing method at DreamHost, but they will not change it and I'm not root, so I can't do it myself. Any wild ideas?

    Following VolkerK's suggestion:

        mysql> SET SESSION old_passwords=0;
        Query OK, 0 rows affected (0.01 sec)

        mysql> SELECT @@global.old_passwords, @@session.old_passwords, Length(PASSWORD('abc'));
        +------------------------+-------------------------+-------------------------+
        | @@global.old_passwords | @@session.old_passwords | Length(PASSWORD('abc')) |
        +------------------------+-------------------------+-------------------------+
        |                      1 |                       0 |                      41 |
        +------------------------+-------------------------+-------------------------+
        1 row in set (0.00 sec)

    The obvious thing now would be to run

        SET GLOBAL old_passwords=0;

    but I need the SUPER privilege to do that and they won't give it to me. If I run the query

        SET PASSWORD FOR 'nodari'@'HOSTNAME' = PASSWORD('new password');

    I get the error

        ERROR 1044 (42000): Access denied for user 'nodari'@'67.205.0.0/255.255.192.0' to database 'mysql'

    I'm not root... The guy at DreamHost support insists that the problem is at my end. But he said he will run any query I tell him to, since it's a private server. So I need to tell this guy EXACTLY what to run. Would telling him to run

        SET SESSION old_passwords=0;
        SET GLOBAL old_passwords=0;
        SET PASSWORD FOR 'nodari'@'HOSTNAME' = PASSWORD('new password');
        GRANT ALL PRIVILEGES ON *.* TO nodari@HOSTNAME IDENTIFIED BY 'new password';

    be a good start?

    Read the article

  • How does Windows Live ID work?

    - by Morgan Cheng
    I happened to find a nice article explaining how OpenID works. Clearly, the OpenID consumer and the OpenID server transfer information through the URL query string. I'm wondering how Live ID accomplishes similar functionality. It seems the info is not exchanged through the query string in the URL, and since the Live ID login server has a different domain name from the consumer's domain, transferring the info through a cookie is not applicable either. I tried to google for a Live ID tutorial, but the results are full of jargon and hard to understand. Is there any easy-to-understand tutorial about how Live ID works?

    Read the article

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way. I have an Articles table:

        CREATE TABLE [dbo].[Articles](
            [server_ref]    [int]          NOT NULL,
            [article_ref]   [int]          NOT NULL,
            [article_title] [varchar](400) NOT NULL,
            [category_ref]  [int]          NOT NULL,
            [size]          [bigint]       NOT NULL
        )

    Data (comma-delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing works like this:

    1. Indexes are disabled on the Articles table.
    2. For each dumped text file, the data is BULK copied to a temporary table.
    3. The temporary table is updated.
    4. Old data for the server is dropped from the Articles table.
    5. The temporary table's data is copied to the Articles table.
    6. The temporary table is dropped.

    Once this process is complete for all servers, the indexes are built and the new database is copied to a web server. I am reasonably happy with this process, but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better, i.e.

        SELECT * FROM Articles WHERE server_ref = 33 AND article_title LIKE '%criteria%'

    has been satisfactory, but I want to improve the speed of searching. Obviously the LIKE is my problem here. Suggestions?

        SELECT * FROM Articles WHERE article_title LIKE '%criteria%'

    is horrendous. Partitioning is a feature of SQL Server Enterprise, but $$$, which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred by the import process (drop data, insert data) and by building the indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal, but it works. I am putting thought into changing the hardware structure of the system. The thinking behind having an import server and a web server is that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) presents reports. Maybe reducing the system down to one server would let us skip the copying-across-the-network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness in giving it a big shake-up. UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.

    Read the article

  • i18n with webpy

    - by translation..
    Hello, I have a problem with i18n, using web.py. I followed this guide: http://webpy.org/cookbook/i18n_support_in_template_file So, in my .wsgi there is:

        # i18n
        gettext.install('messages', I18N_PATH, unicode=True)
        gettext.translation('messages', I18N_PATH, languages=['fr_FR', 'en_US']).install(True)

    Then I ran:

        pygettext.py -a -v -d messages -o i18n/messages.po controllers/*.py views/*.html

    I copied and translated messages.po, and I also changed the "Content-Type" and the "Content-Transfer-Encoding":

        "Content-Type: text/plain; charset=utf-8\n"
        "Content-Transfer-Encoding: UTF-8\n"

    And I ran this command:

        msgfmt -v -o i18n/fr_FR/LC_MESSAGES/messages.mo i18n/fr_FR/LC_MESSAGES/messages.po
        >>> 93 messages traduits.

    Here is the layout of the i18n folder:

        i18n/:
            en_US  fr_FR  messages.po
        i18n/en_US:
            LC_MESSAGES
        i18n/en_US/LC_MESSAGES:
            messages.mo  messages.po
        i18n/fr_FR:
            LC_MESSAGES
        i18n/fr_FR/LC_MESSAGES:
            messages.mo  messages.po

    But when I go to my website (my browser's language is fr_FR), the strings are not translated, and I don't know why. Does anyone have an idea? Thanks

    Read the article

  • Can we use JCR API over MySQL?

    - by n_mysql
    Apache Jackrabbit (or the JCR API) helps you separate the data store from the data management system. This would mean that every data store provider would have to implement the JCR API for his own data store. The question is: is JCR implemented for MySQL? Can we use the JCR API over MySQL? I want to truly abstract away where I store my content, so that tomorrow, if I don't want to use a relational DB, I can swap it out for the file system with ease.

    Read the article

  • KISS: Simple C# application which communicates with a RESTful web service.

    - by Workshop Alex
    Following the KISS principle, I suddenly realised the following: in .NET, you can use the Entity Framework to wrap around a database. This model can be exposed as a web service through WCF. This web service would have a very standardized definition. A client application could be created which could consume any such RESTful web service. I don't want to re-invent the wheel, and it wouldn't surprise me if someone has already done this, so my question is simple: has anyone already created a simple (desktop, not web) client application that can consume a RESTful service based on the Entity Framework and that lets the user read and write data directly through this service? Otherwise, I'll just have to "invent" this myself. :-)

    Problem is, the database layer and RESTful service are already finished. The RESTful service will only stay in the project during its development phase, since we can use the database-layer assembly directly from the web applications that are built around it. When the web application is deployed, the RESTful services are just kept out of the deployment. But the database has a lot of data to manage, spread over nearly 50 tables. When developing against a local database, we have direct access to the database, so I wouldn't need this tool there. Once it's deployed, the web application would be the only way to access the data, so I could not use this tool either. But we also have a test phase where the database is stored on another system outside the local domain, and this database is not available to developers. Only administrators have direct access to it, making tests a bit more complex. However, through the RESTful service I can still access the data directly. Thus, when some test goes wrong, I can repair the data through this connection or just create a copy of the data for tests on my local system.

    There's plenty of other functionality, and it's even possible to just open the URL of a table service straight in Excel or XMLSpy to see the contents. But when I want to write something back, I have to write special code to do just that. A generic tool that would allow me to access the data and modify it would be easier. Since it's a generic setup around ADO.NET Data Services, this should be reasonably easy too. So I can do it, but I hoped someone else had already done something similar. It appears that no such tool has been made yet...
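
    For ad-hoc poking at such a service, the HTTP surface is small enough to script directly; a minimal sketch is below. The service URL and entity set name are hypothetical, and a real generic client would more likely wrap DataServiceContext or DataSvcUtil-generated classes rather than raw WebClient calls.

        // Sketch of reading an ADO.NET Data Services endpoint over plain HTTP.
        using System;
        using System.Net;
        using System.Xml.Linq;

        class RestPeek
        {
            static void Main()
            {
                var client = new WebClient();

                // Read: entity sets come back as Atom feeds by default.
                string feedXml = client.DownloadString(
                    "http://localhost:1234/DataService.svc/Customers?$top=5");
                XDocument feed = XDocument.Parse(feedXml);
                Console.WriteLine(feed.Root.Name);   // {http://www.w3.org/2005/Atom}feed

                // Writes go back as HTTP POST/PUT/MERGE requests with an Atom or JSON
                // body; building that payload generically is where a tool like the one
                // described above earns its keep.
            }
        }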

    Read the article

  • Can someone explain RAID-0 in plain English?

    - by Edward Tanguay
    I've heard about and read about RAID throughout the years and understand it theoretically as a way to help e.g. server PCs reduce the chance of data loss. But now I am buying a new PC which I want to be as fast as possible, and I have learned that having two drives can considerably increase the perceived performance of your machine. In the question Recommendations for hard drive performance boost, the author says he is going to RAID-0 two 7200 RPM drives together. What does this mean in practical terms for me with Windows 7 installed? E.g. can I buy two drives, go into the device manager and "RAID-0 them together"? I am not a network administrator or a hardware guy, I'm just a developer who is going to have a computer store build me a super fast machine next week. I can read the Wikipedia page on RAID, but it is just way too many trees and not enough forest to help me build a faster PC:

        RAID-0: "Striped set without parity" or "Striping". Provides improved performance
        and additional storage but no redundancy or fault tolerance. Because there is no
        redundancy, this level is not actually a Redundant Array of Inexpensive Disks,
        i.e. not true RAID. However, because of the similarities to RAID (especially the
        need for a controller to distribute data across multiple disks), simple stripe
        sets are normally referred to as RAID 0. Any disk failure destroys the array,
        which has greater consequences with more disks in the array (at a minimum,
        catastrophic data loss is twice as severe compared to single drives without
        RAID). A single disk failure destroys the entire array because when data is
        written to a RAID 0 drive, the data is broken into fragments. The number of
        fragments is dictated by the number of disks in the array. The fragments are
        written to their respective disks simultaneously on the same sector. This allows
        smaller sections of the entire chunk of data to be read off the drive in
        parallel, increasing bandwidth. RAID 0 does not implement error checking, so any
        error is unrecoverable. More disks in the array means higher bandwidth, but
        greater risk of data loss.

    So in plain English, how can "RAID-0" help me build a faster Windows 7 PC that I am going to order next week?

    Read the article

  • Google Chrome audit on caching

    - by Álvaro G. Vicario
    If I run an audit on my sites with Google Chrome, I get this message in the "Leverage browser caching" section:

        The following resources are missing a cache expiration. Resources that do not
        specify an expiration may not be cached by browsers:

    A list of all the pictures follows. I get a similar notice in "Leverage proxy caching":

        Consider adding a "Cache-Control: public" header to the following resources:

    Apart from pictures, I also get a notice about HTML, CSS and JavaScript files:

        The following resources are explicitly non-cacheable. Consider making them
        cacheable if possible:

    It's funny, because I've worked hard to cache all static contents (except for pictures, where I just left Apache's default settings), and Firefox does indeed store all these items in its cache. Is there anything I should improve in my HTTP headers? Here are the complete header sets of some items as loaded after clearing the browser cache. Pictures use default settings I didn't really check before; the rest should be cached for three hours. I can set headers with both .htaccess and PHP.

    PNG:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:46:14 GMT
        Server: Apache
        Last-Modified: Thu, 18 Mar 2010 21:40:54 GMT
        Etag: "c48024-230-4821a15d6c580"
        Accept-Ranges: bytes
        Content-Length: 560
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Content-Type: image/png

    HTML:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:46:13 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:46:13 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Wed, 24 Mar 2010 20:30:36 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-15

    CSS:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:48:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:48:21 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/css

    JavaScript:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:48:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:48:21 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: application/x-javascript

    Update: I've tested Jumby's suggestion and set my CSS's expiry to one year:

        Cache-Control: max-age=31536000, s-maxage=31536000, must-revalidate, proxy-revalidate
        Connection: Keep-Alive
        Content-Encoding: gzip
        Content-Length: 4198
        Content-Type: text/css
        Date: Mon, 02 Aug 2010 20:48:56 GMT
        Expires: Tue, 02 Aug 2011 20:48:56 GMT
        Keep-Alive: timeout=5, max=99
        Last-Modified: Thu, 18 Mar 2010 20:40:12 GMT
        Server: Apache/2.2.14 (Win32) PHP/5.3.1
        Vary: Accept-Encoding
        X-Powered-By: PHP/5.3.1

    However, Chrome still claims the resource is "explicitly non-cacheable".

    Read the article

  • Matrix inversion in OpenCL

    - by buchtak
    Hi, I am trying to accelerate some computations using OpenCL, and part of the algorithm consists of inverting a matrix. Is there any open-source library or freely available code to compute the LU factorization (LAPACK dgetrf and dgetri) of a matrix, or general inversion, written in OpenCL or CUDA? The matrix is real and square but doesn't have any other special properties besides that. So far, I've managed to find only basic BLAS matrix-vector operation implementations for the GPU. The matrix is rather small, only about 60-100 rows and columns, so it could be computed faster on the CPU, but it's used more or less in the middle of the algorithm, so I would have to transfer it to the host, calculate the inverse, and then transfer the result back to the device, where it's then used in much larger computations.

    Read the article

  • How to pass a byte array as a parameter in a URL on the iPhone?

    - by Jindal
    I am using the following code to get a byte array (thanks to this post):

        NSData *data = [NSData dataWithContentsOfFile:filePath];
        NSUInteger len = [data length];
        Byte *byteData = (Byte *)malloc(len);
        memcpy(byteData, [data bytes], len);

    Is this the right way to do it? Now, how can I pass the byte array in a URL? Thank you for the help.

    Read the article

  • load google annotated chart within jquery ui tab content via ajax method

    - by twmulloy
    Hi, I am encountering an issue trying to load a Google annotated timeline chart (http://code.google.com/apis/visualization/documentation/gallery/annotatedtimeline.html) within a jQuery UI tab whose content is loaded via the Ajax method (http://jqueryui.com/demos/tabs/#ajax). If instead I use the default tabs functionality and write out the code inline, things work fine:

        <div id="tabs">
            <ul>
                <li><a href="#tabs-1">Chart</a></li>
            </ul>
            <div id="tabs-1">
                <script type="text/javascript">
                    google.load('visualization', '1', {'packages':['annotatedtimeline']});
                    google.setOnLoadCallback(drawChart);
                    function drawChart() {
                        var data = new google.visualization.DataTable();
                        data.addColumn('date', 'Date');
                        data.addColumn('number', 'cloudofinc.com');
                        data.addColumn('string', 'header');
                        data.addColumn('string', 'text');
                        data.addColumn('number', 'All Clients');
                        data.addRows([
                            [new Date('May 12, 2010'), 2, '2 New Users', '', 3],
                            [new Date('May 13, 2010'), 0, undefined, undefined, 0],
                            [new Date('May 14, 2010'), 0, undefined, undefined, 0],
                        ]);
                        var chart = new google.visualization.AnnotatedTimeLine(document.getElementById('chart_users'));
                        chart.draw(data, { displayAnnotations: false, fill: 10, thickness: 1 });
                    }
                </script>
                <div id='chart_users' style='width: 100%; height: 400px;'></div>
            </div>
        </div>

    But if I use the Ajax method for the jQuery UI tab and point it at a partial for the tab, it doesn't work completely. The page renders and, once the chart loads, the browser window goes white; you can see the tab partial flash just before the chart appears to finish rendering, but the chart never actually displays. I have verified that the partial itself, without the chart, loads properly:

        <div id="tabs">
            <ul>
                <li><a href="ajax/tabs-1">Chart</a></li>
            </ul>
        </div>

    Read the article

  • Faster Matrix Multiplication in C#

    - by Kyle Lahnakoski
    I have a small C# project that involves matrices. I am processing large amounts of data by splitting it into n-length chunks, treating the chunks as vectors, and multiplying by a Vandermonde** matrix. The problem is that, depending on the conditions, the size of the chunks and the corresponding Vandermonde** matrix can vary. I have a general solution which is easy to read, but way too slow:

        public byte[] addBlockRedundancy(byte[] data) {
            if (data.Length != numGood)
                D.error("Expecting data to be just " + numGood + " bytes long");
            aMatrix d = aMatrix.newColumnMatrix(this.mod, data);
            var r = vandermonde.multiplyBy(d);
            return r.ToByteArray();
        }//method

    This can process about 1/4 megabyte per second on my i5 U470 @ 1.33GHz. I can make it faster by manually inlining the matrix multiplication:

        int o = 0;
        int d = 0;
        for (d = 0; d < data.Length - numGood; d += numGood) {
            for (int r = 0; r < numGood + numRedundant; r++) {
                Byte value = 0;
                for (int c = 0; c < numGood; c++) {
                    value = mod.Add(value, mod.Multiply(vandermonde.get(r, c), data[d + c]));
                }//for
                output[r][o] = value;
            }//for
            o++;
        }//for

    This can process about 1 meg a second. (Please note that "mod" is performing operations over GF(2^8) modulo my favorite irreducible polynomial.) I know this can get a lot faster: after all, the Vandermonde** matrix is mostly zeros. I should be able to make a routine, or find a routine, that can take my matrix and return an optimized method which will effectively multiply vectors by the given matrix, but faster. Then, when I give this routine a 5x5 Vandermonde matrix (the identity matrix), there is simply no arithmetic to perform, and the original data is just copied.

    ** Please note: when I use the term "Vandermonde", I actually mean an identity matrix with some number of rows from the Vandermonde matrix appended (see comments). This matrix is wonderful because of all the zeros, and because if you remove enough rows (of your choosing) to make it square, it is an invertible matrix. And, of course, I would like to use this same routine to convert any one of those inverted matrices into an optimized series of instructions. How can I make this matrix multiplication faster? Thanks! (Edited to correct my mistake with the Vandermonde matrix.)
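
    One way to act on the "mostly zeros" observation, sketched below: compile the matrix once into per-row lists of the non-zero (column, coefficient) pairs and special-case pure identity rows as plain copies. This is a sketch rather than a drop-in replacement for the code above; the GF(2^8) multiplication table passed to the constructor is an assumption standing in for the question's mod object (addition in GF(2^8) is XOR).

        // Compile the matrix once; reuse the compiled form for every chunk.
        using System.Collections.Generic;

        sealed class CompiledMatrix
        {
            private readonly int[][] cols;      // non-zero column indices per output row
            private readonly byte[][] coefs;    // matching coefficients per output row
            private readonly int[] copyFrom;    // >= 0 when the row is identity: just copy
            private readonly byte[,] mul;       // GF(2^8) multiplication table (assumption)

            public CompiledMatrix(byte[][] matrix, byte[,] mulTable)
            {
                mul = mulTable;
                int rows = matrix.Length;
                cols = new int[rows][]; coefs = new byte[rows][]; copyFrom = new int[rows];
                for (int r = 0; r < rows; r++)
                {
                    var c = new List<int>(); var k = new List<byte>();
                    for (int j = 0; j < matrix[r].Length; j++)
                        if (matrix[r][j] != 0) { c.Add(j); k.Add(matrix[r][j]); }
                    copyFrom[r] = (c.Count == 1 && k[0] == 1) ? c[0] : -1;
                    cols[r] = c.ToArray(); coefs[r] = k.ToArray();
                }
            }

            public void Multiply(byte[] input, byte[] output)
            {
                for (int r = 0; r < cols.Length; r++)
                {
                    if (copyFrom[r] >= 0) { output[r] = input[copyFrom[r]]; continue; }
                    byte acc = 0;
                    int[] c = cols[r]; byte[] k = coefs[r];
                    for (int t = 0; t < c.Length; t++)
                        acc ^= mul[k[t], input[c[t]]];      // add = XOR in GF(2^8)
                    output[r] = acc;
                }
            }
        }

    A 5x5 identity matrix compiled this way degenerates to five copies and no field arithmetic, which is the behaviour asked for above.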

    Read the article

  • CodeIgniter global variable

    - by Shishant
    Hello, I am using $data in all my views:

        $this->load->view('my_view', $data);

    I have also autoloaded a controller following this guide: Extending the Core Controller. But I want to make $data global, because the views contain a sidebar which is constant across the whole project and displays info fetched from the db in the autoloaded controller. Currently I have to manually set $data['todo'] for each view and fetch the info from the autoloaded model. Thank you.

    Read the article

  • How to get an acknowledgement in TCP communication in Java

    - by Sunil Kumar Sahoo
    I have written a socket program in Java. Both server and client can send and receive data to each other. But I found that when the client sends data to the server over TCP, TCP internally sends an acknowledgement to the client once the data is received by the server. I want to detect or handle that acknowledgement. How can I read or write data in TCP so that I can handle the TCP acknowledgement? Thanks, Sunil Kumar Sahoo

    Read the article

  • Hosting solution for images for website written in PHP

    - by tomaszs
    I've written a website in PHP and it will let users upload images. The website will have more than 100,000 users. Approximately 1k users will upload an image of about 50 KB, and every image will be displayed on the website 5k times, so the transfer is: 1k x 50 KB x 5k = 250 GB per month. So my question is: do you know any good solution (hosting, CDN network or otherwise) that:

    - is paid for transfer, not space used, with no entrance fee
    - has an API so images can be uploaded easily with PHP
    - is extremely easy to use
    - is good for a low budget
    - does not require any special, complicated registration or formalities
    - allows commercial use
    - allows using these images in the website layout

    ?

    Read the article

  • Query with Two Different DSN

    - by morant
    I have a query where the "X" tables are from one data source and the "Y" table is from another, but there is a join between the data sources. I can't seem to figure out how to enter the two different connection strings for this to run:

        ConnectionString="Dsn=Xdb;uid=xxx;pwd=xxxxxxx" ProviderName="System.Data.Odbc"
        ConnectionString="Dsn=Ydb;uid=xxx;pwd=xxxxxxx" ProviderName="System.Data.Odbc"

    Is this possible, or am I just missing something?
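
    One workaround, in case it helps: a single ODBC command can't span two DSNs, so each side can be pulled into a DataTable over its own connection and joined in memory. The table and column names below (XOrders, YCustomers, customer_id) are placeholders, and whether an in-memory join is acceptable depends on how much data each side returns:

        // Fetch each side separately, then join with LINQ over the DataTables.
        using System.Data;
        using System.Data.Odbc;
        using System.Linq;

        class TwoDsnJoin
        {
            static DataTable Load(string connStr, string sql)
            {
                using (var conn = new OdbcConnection(connStr))
                using (var adapter = new OdbcDataAdapter(sql, conn))
                {
                    var table = new DataTable();
                    adapter.Fill(table);   // opens and closes the connection itself
                    return table;
                }
            }

            static void Main()
            {
                DataTable x = Load("Dsn=Xdb;uid=xxx;pwd=xxxxxxx", "SELECT * FROM XOrders");
                DataTable y = Load("Dsn=Ydb;uid=xxx;pwd=xxxxxxx", "SELECT * FROM YCustomers");

                var joined = from xr in x.AsEnumerable()
                             join yr in y.AsEnumerable()
                               on xr.Field<int>("customer_id") equals yr.Field<int>("customer_id")
                             select new { Order = xr, Customer = yr };

                System.Console.WriteLine(joined.Count());
            }
        }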

    Read the article

  • Memory Mapped Files .NET

    - by CSharpAtl
    I have a project that needs to access a large amount of proprietary data from ASP.NET. On Linux/PHP this was done by loading the data into shared memory. I was wondering whether memory-mapped files would be the way to go in .NET, or if there is a better way with better .NET support. I was also thinking of using the data cache, but I'm not sure of all the pitfalls of the size of the data being kept in the cache.
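
    If memory-mapped files turn out to be the right fit, .NET 4 exposes them directly through System.IO.MemoryMappedFiles; a minimal sketch is below (the file path and map name are placeholders). A named map can be opened by several processes, which is the closest analogue to the PHP shared-memory setup described above:

        // Map a large data file and read from it without loading it all into managed memory.
        using System;
        using System.IO.MemoryMappedFiles;

        class MmfPeek
        {
            static void Main()
            {
                using (var mmf = MemoryMappedFile.CreateFromFile(
                           @"C:\data\proprietary.bin", System.IO.FileMode.Open, "ProprietaryData"))
                using (var view = mmf.CreateViewAccessor())
                {
                    // Random access into the mapped region.
                    byte[] header = new byte[16];
                    view.ReadArray(0, header, 0, header.Length);
                    Console.WriteLine("first byte: {0}", header[0]);
                }
            }
        }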

    Read the article
