Search Results

Search found 2287 results on 92 pages for 'reads'.


  • audio stream sampling rate in linux

    - by farhan
    I'm trying to read and store samples from an audio microphone in Linux using C/C++. Using the PCM ioctls I set the device up with a certain sampling rate, say 10 kHz, using the SOUND_PCM_WRITE_RATE ioctl, etc. The device gets set up correctly and I'm able to read back from it after setup using read(): int got = read(itsFd, b.getDataPtr(), b.sizeBytes()); The problem I have is that after setting the appropriate sampling rate, I have a thread that continuously reads from /dev/dsp1 and stores these samples, but the number of samples I get for 1 second of recording is way off the sampling rate, always orders of magnitude more than the rate I set. Any ideas where to begin figuring out what the problem might be?
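
    As a hedged illustration of that setup, here is a minimal capture sketch (the device path, the 10 kHz rate, the 16-bit mono format and the byte arithmetic are assumptions for illustration, not taken from the post). One thing worth checking is that read() counts bytes rather than samples, so one second of audio is rate * channels * bytes-per-sample bytes:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <sys/soundcard.h>
        #include <unistd.h>

        int main(void) {
            int fd = open("/dev/dsp1", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            int fmt = AFMT_S16_LE, channels = 1, rate = 10000;
            ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);        /* 16-bit signed samples */
            ioctl(fd, SNDCTL_DSP_CHANNELS, &channels); /* mono */
            ioctl(fd, SOUND_PCM_WRITE_RATE, &rate);    /* driver may round; re-check rate afterwards */

            /* read() returns a byte count, so one second of 16-bit mono at 10 kHz
               is rate * channels * 2 bytes, not "rate" calls to read(). */
            const long bytes_per_second = (long)rate * channels * 2;
            char buf[4096];
            long got_bytes = 0;
            while (got_bytes < bytes_per_second) {
                int n = read(fd, buf, sizeof buf);
                if (n <= 0) break;
                got_bytes += n;
            }
            printf("captured %ld samples in one second\n", got_bytes / (channels * 2));
            close(fd);
            return 0;
        }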

  • Is there a function that can read a PHP file post-parsing?

    - by Rob
    I've got a PHP file echoing hashes from a MySQL database. This is necessary for a remote program I'm using, but at the same time I need my other PHP script to open it and check it for specified strings post-parsing. If it checks for the strings pre-parsing, it will just get the MySQL query rather than the strings to look for. I'm not sure which functions do this. Does fopen() read the file prior to parsing? Or file_get_contents()? If so, is there a function that will read the file after the PHP and MySQL code runs? The file with the hashes query and echo is in the same directory as the PHP file reading it, if that makes a difference. Perhaps fopen() reads it post-parse and I've done something wrong, but at first I was storing the hashes directly in the file and it was working fine. After I changed it to echo the contents of the MySQL table, it bugged out.

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect th

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to Stack Overflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted SELECT queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they are not completely up to date all of the time. This is my only idea for possibly improving the performance of these selects and updates, but I thought someone might know a better way.

  • SSIS package from SQL Agent failed

    - by Pramodtech
    I have a simple package which reads data from a CSV file and loads it into a SQL table. The file is located on another server and is shared; I use a UNC path in the package. The package is scheduled using a SQL Agent job. The job worked fine for a week and then suddenly started giving the error "The file name "\\124.0.48.173\basel2\Commercial\Input\ACBS_GSU.csv" specified in the connection was not valid. End Error Error: 2010-04-20 16:15:07.19 Code: 0xC0202070 Source: ACBS_GSU Connection manager "CSV file conection" Description: Connection "CSV file conection" failed validation." Any help will be appreciated.

  • Multi-Threaded Application - Help with some pseudo code!!

    - by HonorGod
    I am working on a multi-threaded application and need help with some pseudo-code. To make it simpler to implement, I will explain it in simple terms / as a test case. Here is the scenario: I have an array list of strings (say 100 strings), and I have a Reader class that reads the strings and passes them to a Writer class that prints the strings to the console. Right now this runs in a single-threaded model. I want to make this multi-threaded, but with the following features: the ability to set MAX_READERS, the ability to set MAX_WRITERS, and the ability to set BATCH_SIZE. So basically the code should instantiate that many readers and writers and do the work in parallel. Any pseudo-code will really be helpful to keep me going!
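
    As a starting point, here is one hedged sketch of that structure: reader threads claim batches of strings and hand them to writer threads through a shared queue. MAX_READERS, MAX_WRITERS and BATCH_SIZE are the knobs named in the post; the queue-based hand-off and everything else is an assumption, not the asker's design:

        #include <algorithm>
        #include <atomic>
        #include <condition_variable>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>
        #include <vector>

        int main() {
            const int MAX_READERS = 2, MAX_WRITERS = 2, BATCH_SIZE = 10;

            std::vector<std::string> input;                       // the "array list of strings"
            for (int i = 0; i < 100; ++i) input.push_back("string " + std::to_string(i));

            std::queue<std::vector<std::string>> batches;         // work handed from readers to writers
            std::mutex m;
            std::condition_variable cv;
            std::atomic<size_t> next{0};
            std::atomic<int> readers_left{MAX_READERS};

            auto reader = [&] {                                   // claims BATCH_SIZE strings at a time
                for (;;) {
                    size_t start = next.fetch_add(BATCH_SIZE);
                    if (start >= input.size()) break;
                    size_t end = std::min(start + BATCH_SIZE, input.size());
                    std::vector<std::string> batch(input.begin() + start, input.begin() + end);
                    { std::lock_guard<std::mutex> lk(m); batches.push(std::move(batch)); }
                    cv.notify_one();
                }
                if (--readers_left == 0) cv.notify_all();         // let idle writers finish
            };

            auto writer = [&] {                                   // prints one batch at a time
                for (;;) {
                    std::unique_lock<std::mutex> lk(m);
                    cv.wait(lk, [&] { return !batches.empty() || readers_left == 0; });
                    if (batches.empty()) return;                  // no work left and readers are done
                    std::vector<std::string> batch = std::move(batches.front());
                    batches.pop();
                    lk.unlock();
                    for (const std::string& s : batch) std::cout << s << '\n';  // console output may interleave
                }
            };

            std::vector<std::thread> pool;
            for (int i = 0; i < MAX_READERS; ++i) pool.emplace_back(reader);
            for (int i = 0; i < MAX_WRITERS; ++i) pool.emplace_back(writer);
            for (std::thread& t : pool) t.join();
            return 0;
        }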

  • [C++] Is it possible to use threads to speed up file reading?

    - by Mister Mystère
    Hi there, I want to read a file as fast as possible (40k lines) [Edit: the rest is obsolete]. Edit: Andres Jaan Tack suggested a solution based on one thread per file, and I want to be sure I got this right (and that it is thus the fastest way): one thread per input file reads the file whole and stores its content in an associated container (so there are as many containers as there are input files); one thread calculates the linear combination of every cell read by the input threads and stores the results in the output container (associated with the output file); and one thread writes the content of the output container to the output file in blocks (every 4 kB of data, so about 10 lines). Should I deduce that I must not use memory-mapped files (because the program is on standby waiting for the data)? Thanks in advance. Sincerely, Mister Mystère.
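
    A minimal sketch of just the first stage of that design, one reader thread per input file slurping into its own container (the file names are placeholders; the combination and output stages are omitted). Whether this beats a single sequential read depends heavily on the storage, since this kind of work is usually I/O-bound rather than CPU-bound:

        #include <fstream>
        #include <iostream>
        #include <iterator>
        #include <string>
        #include <thread>
        #include <vector>

        int main() {
            std::vector<std::string> files = {"input_a.txt", "input_b.txt"};  // assumed names
            std::vector<std::string> contents(files.size());                  // one container per file

            std::vector<std::thread> readers;
            for (size_t i = 0; i < files.size(); ++i) {
                readers.emplace_back([&, i] {
                    // Each thread reads its whole file into its own slot, so the
                    // readers never share a destination buffer.
                    std::ifstream in(files[i], std::ios::binary);
                    contents[i].assign(std::istreambuf_iterator<char>(in),
                                       std::istreambuf_iterator<char>());
                });
            }
            for (std::thread& t : readers) t.join();

            for (size_t i = 0; i < files.size(); ++i)
                std::cout << files[i] << ": " << contents[i].size() << " bytes\n";
            return 0;
        }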

  • Speed-up of readonly MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except on these occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize the performance of reads from it? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take up a large portion of the server's memory, which I don't want. I hope my question is clear enough; I'm a novice when it comes to DB administration, so any input or suggestions are welcome.

  • Sharing some info with all DLLs pulled into a process

    - by JBRWilkinson
    Hi all, we've got an enterprise system which has many processes (EXEs, services, DCOM servers, COM+ apps, ISAPI, MMC snap-ins), all of which make use of many COM components. We've recently seen failures in some of the customer deployments, but are finding it hard to troubleshoot the cause. In order to track down the problem, we've augmented the entire source with logging statements where errors occur. To identify which logs came from which processes, the C++ logging code (compiled into all components) uses the EXE name to name the log. This is good for some cases, but not all - COM+ apps, ISAPI and MMC snap-ins all run under generic system EXE names, and their logs end up interleaved. I saw this post about shared data sections, which might help, but what I don't understand is who decides what goes in the shared section. Is there any way I can guarantee that a particular piece of code writes into the shared section before anyone else reads it? Or is there a better solution to this problem?
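
    For reference, this is roughly what a shared data section in a common DLL looks like with MSVC (the section and variable names are illustrative, not from the post): whatever is declared between the pragmas, provided it is explicitly initialized, lives in the named section, and the linker directive marks that section read/write/shared.

        // MSVC-specific sketch of a named shared data section in a DLL.
        #pragma data_seg(".shared")
        volatile char g_sharedLogName[260] = "";          // must be explicitly initialized to land in .shared
        #pragma data_seg()
        #pragma comment(linker, "/SECTION:.shared,RWS")   // read, write, shared

    Note that such a section is shared between every process that loads the DLL, so concurrent writers still need their own synchronization (a named mutex, for instance), and a named file mapping is the more flexible alternative if the shared state grows.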

  • How to implement a mailing system with Rails that sends emails in the background

    - by Tam
    I want to implement a reliable mailing system with Ruby on Rails that sends emails in the background, since sending an email sometimes takes 10 seconds or more and I don't want the user to wait. Some ideas I have thought of: 1) write to a table in the DB and have a background process that goes over it and sends the emails (concern: potentially many reads/writes to the DB slow down my application); 2) a message queue with a background process / Rake task (concern: if the server crashes, queued mails will be lost, and it might also eat up a lot of memory if there are many emails). I was wondering if you know of a good solution that provides a balance between reliability and performance.

  • Bookmarkable AJAX calls with MVC routing

    - by devzero
    I have a page with a menu that uses jQuery AJAX calls to populate the page. To reflect any changes, I update the URL with a #... instead of ?... or /... So a URL that originally reads http://localhost/pages/index/id=1 would look like http://localhost/#pages/index/id=1. If a user bookmarks this and later comes back to the page, I wonder if it's possible to use the second URL in my route decoding, or if I have to load the page blank and then use the same JS/AJAX to populate it? In my mind it is problematic to use AJAX in these cases if a user copies the link and mails it to a friend with JavaScript disabled. edit#1: Fixed some spelling.

  • Reading and writing to SysV shared memory without synchronization (use of semaphores, C/C++, Linux)

    - by user363778
    Hi, I use SysV shared memory to let two processes communicate with each other. I do not want the code to become too complex, so I wondered if I really have to use semaphores to synchronize access to the shared memory. In my C/C++ program the parent process reads from the shared memory and the child process writes to it. I wrote two test applications to see if I could produce some kind of error like a segmentation fault, but I couldn't (Ubuntu 10.04, 64-bit). Even two processes writing non-stop in a while loop to the same shared memory did not produce any error. I hope someone has experience with this and can tell me whether I really must use semaphores to synchronize the access, or whether I am OK without synchronization. Thanks
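
    For concreteness, a minimal SysV sketch of the setup described (the key, record layout and strings are illustrative assumptions). The absence of crashes is expected: unsynchronized access to shared memory does not fault, but the reader can still observe a half-written record, which is what a semaphore (or some other form of synchronization) is there to prevent:

        #include <sys/ipc.h>
        #include <sys/shm.h>
        #include <cstdio>
        #include <cstring>

        struct Record { int seq; char payload[120]; };

        int main() {
            // Normally created before fork(); shown inline in one process for brevity.
            int shmid = shmget(IPC_PRIVATE, sizeof(Record), IPC_CREAT | 0600);
            if (shmid < 0) { std::perror("shmget"); return 1; }

            void* p = shmat(shmid, nullptr, 0);
            if (p == reinterpret_cast<void*>(-1)) { std::perror("shmat"); return 1; }
            Record* rec = static_cast<Record*>(p);

            // Writer side (the child in the question):
            std::strcpy(rec->payload, "hello from the writer");
            rec->seq = 1;

            // Reader side (the parent): without synchronization it may see seq
            // updated while payload is still being written, or stale data.
            std::printf("seq=%d payload=%s\n", rec->seq, rec->payload);

            shmdt(rec);
            shmctl(shmid, IPC_RMID, nullptr);
            return 0;
        }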

  • Converting between unsigned and signed int safely

    - by polemic
    I have an interface between a client and a server where the client sends (1) an unsigned value and (2) a flag which indicates whether the value is signed or unsigned. The server would then static_cast the unsigned value to the appropriate type. I later found out that this is implementation-defined behavior, and I've been reading about it but couldn't seem to find an appropriate solution that's completely safe. I've read about type punning, pointer conversions, and memcpy. Would simply using a union type work? A UnionType containing a signed and an unsigned int, along with the signed/unsigned flag: for signed values the client sets the signed part of the union and the server reads the signed part, and the same for the unsigned part. Or am I completely misunderstanding something? Side question: how do I know the specific behavior in this case for a specific scenario, e.g. Wind River Diab on PPC? I'm a bit lost on how to find such documentation.
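
    One commonly used alternative worth considering (a hedged sketch, not the only safe route): reading a union member other than the one last written is allowed in C but is formally not sanctioned in ISO C++, so copying the bit pattern with memcpy is the usual portable choice. On a two's-complement target such as PPC this reproduces the value the client intended, and since C++20 the plain conversion is defined to behave the same way:

        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        int32_t as_signed(uint32_t wire) {
            int32_t out;
            std::memcpy(&out, &wire, sizeof out);   // copy the bits; no implementation-defined conversion
            return out;
        }

        int main() {
            uint32_t wire = 0xFFFFFFFEu;            // the client sent -2, transported as unsigned
            std::printf("%d\n", as_signed(wire));   // prints -2 on a two's-complement machine
            return 0;
        }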

  • [macosx] Does dlopen call open and read functions?

    - by zbencik
    Hello, I've intercepted (interposed) the dlopen function under Mac OS X, along with some other functions. I see my application call dlopen in the log, but I don't find anything related to the open/read functions after a dynamic library has been dlopened. How does the system access and read the dynamic library file? I've looked at the source code of dyld, and it does call open/read on dlopen. Can anybody let me know what I'm missing? Intercepted functions: dlopen, open, read, write, access, all stat functions, close, etc. Thanks, any help is highly appreciated.

  • How do I change the Scala version that sbt works with?

    - by ashy_32bit
    Firing up the sbt console, it reads: [info] Building project AYLIEN 1.0 against Scala 2.8.1 [info] using MyProject with sbt 0.7.4 and Scala 2.7.7. How can I make it use MyProject with sbt 0.7.4 and Scala 2.8.1? Please note that I'm not asking about the Scala version used to build my project (that is 2.8.1, as you can see); rather, I want to make sbt use MyProject with Scala 2.8.1. Apparently sbt uses its own Scala version to work with the project definition (MyProject here), which is different from the one it uses to actually build the project! Or perhaps I'm missing something...?

  • Fill data gaps - UNION, PARTITION BY, or JOIN?

    - by Dave Jarvis
    Problem: There are data gaps that need to be filled. Would like to avoid UNION or PARTITION BY if possible.

    Query statement - the select statement reads as follows:

        SELECT count( r.incident_id ) AS incident_tally, r.severity_cd, r.incident_typ_cd
        FROM report_vw r
        GROUP BY r.severity_cd, r.incident_typ_cd
        ORDER BY r.severity_cd, r.incident_typ_cd

    Data sources: the severity codes and incident type codes are from severity_vw and incident_type_vw. The columns are: incident_tally, severity_cd, incident_typ_cd.

    Actual result data:

        36 0 ENVIRONMENT
         1 1 DISASTER
        27 1 ENVIRONMENT
         4 2 SAFETY
         1 3 SAFETY

    Required result data:

        36 0 ENVIRONMENT
         0 0 DISASTER
         0 0 SAFETY
        27 1 ENVIRONMENT
         0 1 DISASTER
         0 1 SAFETY
         0 2 ENVIRONMENT
         0 2 DISASTER
         4 2 SAFETY
         0 3 ENVIRONMENT
         0 3 DISASTER
         1 3 SAFETY

    Question: How would you use UNION, PARTITION BY, or LEFT JOIN to fill in the zero counts?

  • Can in-memory SQLite databases be used concurrently?

    - by Kent Boogaart
    In order to prevent a SQLite in-memory database from being cleaned up, one must use the same connection to access the database. However, using the same connection causes SQLite to synchronize access to the database. Thus, if I have many threads performing reads against an in-memory database, it is slower on a multi-core machine than the exact same code running against a file-backed database. Is there any way to get the best of both worlds? That is, an in-memory database that permits multiple, concurrent calls to the database?
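
    One route worth checking, sketched here against the SQLite C API (names and SQL are illustrative, and the mapping onto the C# setup in the question is an assumption): a shared-cache in-memory database opened through a URI filename lets several connections address the same in-memory database instead of each getting a private one. This needs a SQLite new enough to support URI filenames, and whether reads then scale across cores still depends on SQLite's shared-cache table locking, so it needs measuring:

        #include <sqlite3.h>
        #include <cstdio>

        int main() {
            sqlite3 *a = nullptr, *b = nullptr;
            const char* uri = "file::memory:?cache=shared";
            const int flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_URI;

            // Two independent connections to the *same* in-memory database.
            if (sqlite3_open_v2(uri, &a, flags, nullptr) != SQLITE_OK) { std::puts("open a failed"); return 1; }
            if (sqlite3_open_v2(uri, &b, flags, nullptr) != SQLITE_OK) { std::puts("open b failed"); return 1; }

            sqlite3_exec(a, "CREATE TABLE t(x); INSERT INTO t VALUES (42);", nullptr, nullptr, nullptr);
            sqlite3_exec(b, "SELECT x FROM t;",
                         [](void*, int, char** vals, char**) { std::printf("x=%s\n", vals[0]); return 0; },
                         nullptr, nullptr);

            sqlite3_close(b);
            sqlite3_close(a);
            return 0;
        }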

  • How can USER_DEPENDENCIES read from the procedure?

    - by Moudiz
    If I run this query: SELECT DISTINCT U.REFERENCED_NAME, U.REFERENCED_TYPE FROM USER_DEPENDENCIES U WHERE U.NAME IN ('P_CREATE_T') it will give me: REFERENCED_NAME = random_name_table, REFERENCED_TYPE = table. If I drop that table (drop table random_name_table) and run the dependency query again, it will give me: REFERENCED_NAME = BIN$6WfJh8MWWGngQ3ATqMDOpQ==$0, REFERENCED_TYPE = table. I know the result is related to the recycle bin, but what I am asking is: is there a way to show the table even if it has been dropped? I mean, shouldn't the dependency query read from the procedure and not from the database? If not, is there a query that reads from the procedure and not from the database? Edit: OK, I will make it clear. My first question is, does USER_DEPENDENCIES read from the procedure or from the database? My second question is, does the recycle bin entry always show up, or are there times when the recycle bin result disappears?

  • Why is the Twitter Bootstrap "fixed" layout NOT fixed?

    - by leonel
    The Twitter Bootstrap site reads as follows: "The default and simple 940px-wide, centered layout for just about any website or page provided by a single <div class="container">." (Quote from http://twitter.github.com/bootstrap/scaffolding.html#layouts.) That's exactly what I have in my HTML, but when I inspect the element, I see this CSS applied to it: .container, .navbar-fixed-top .container, .navbar-fixed-bottom .container { width: 1170px; } By the way, if I override that CSS rule by adding div.container { width: 940px; }, then the elements inside the div.container are wider than the div.container itself and look out of place. So, why is the Twitter Bootstrap "fixed" layout NOT fixed, and how can I make it fixed?

  • How does one SELECT block another?

    - by Krip
    I'm looking at the output of sp_WhoIsActive on SQL Server 2005, and it's telling me one session is blocking another - fine. However, they are both running a SELECT. How does one SELECT block another? Shouldn't they both be acquiring shared locks (which are compatible with one another)? Some more details: neither session has an open transaction count, so they are stand-alone. The queries join a view with a table; they are complex queries which join lots of tables and result in 10,000 or so reads. Any insight much appreciated.

  • Using JavaScript to access a JSON array from PHP

    - by celenius
    I'm trying to understand how my PHP script can pass an array to my JavaScript code. Using the following PHP, I pass an array:

        $c = array(3,2,7);
        echo json_encode($c);

    My JavaScript is as follows:

        $.post("getLatLong.php", { latitude: 500000 }, function(data) {
            arrayData = data;
            document.write(arrayData);
            document.write(arrayData[0]);
            document.write(arrayData[0]);
            document.write(arrayData[0]);
        });

    What is printed on screen is [3,2,7][3. I'm trying to understand how json_encode works - I thought I would be able to pass the array to a variable and then access it like a normal JavaScript array, but it treats my array as one large text string. How do I ensure that it reads it like an array?

  • ASP.NET MVC 2 controller-URL problems

    - by cc0
    I am still very new to the MVC framework, but I managed to create a controller that reads from a database and writes JSON to a URL: host.com/Controllername?minValue=something&maxValue=something. However, when I move the site to a subfolder (host.com/mvc/), it doesn't seem to be able to call the controller from there when I do it like this: host.com/mvc/Procedure?minValue=something&maxValue=something. Did I forget to do something somewhere to make this URL call valid from that subfolder? Any help here would be greatly appreciated.

  • How would I read a file that has 3 columns, each containing 100 numbers, into arrays?

    - by user320950
        int exam1[100]; // array that can hold 100 numbers for 1st column
        int exam2[100]; // array that can hold 100 numbers for 2nd column
        int exam3[100]; // array that can hold 100 numbers for 3rd column

        void main()
        {
            ifstream infile;
            int num;
            infile.open("example.txt"); // file containing numbers in 3 columns
            if(infile.fail()) // checks to see if file opened
            {
                cout << "error" << endl;
            }
            while(!infile.eof()) // reads file to end of line
            {
                for(i=0;i<100;i++); // array numbers less than 100
                {
                    while(infile >> [exam]); // while reading get 1st array or element
                    ??? // how will i go read the next number
                    infile >> num;
                }
            }
            infile.close();
        }
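
    For comparison, a hedged sketch of one way to structure that loop (it keeps the question's file name and 100-row limit, and stops at end of file by testing the extraction itself rather than eof() up front):

        #include <fstream>
        #include <iostream>
        using namespace std;

        int main() {
            int exam1[100], exam2[100], exam3[100];
            ifstream infile("example.txt");
            if (!infile) { cout << "error" << endl; return 1; }

            int rows = 0;
            // Each successful iteration consumes one line: column 1, column 2, column 3.
            while (rows < 100 && infile >> exam1[rows] >> exam2[rows] >> exam3[rows])
                ++rows;

            cout << "read " << rows << " rows" << endl;
            return 0;
        }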

  • Issues reading a CSV file using OLEDB when the filename has a period

    - by Rodel Dagumampan
    Issues reading a CSV file using OLEDB when the filename has a period. I have code in C# that reads a CSV file using the OleDb provider. It works perfectly with filenames in the regular format, such as Budget.csv, but fails when I rename the file to Budget.DKK.csv or Budget.USD.csv. It throws this exception: "The Microsoft Jet database engine could not find the object 'Budget.DKK.csv'. Make sure the object exists and that you spell its name and the path name correctly." I have no idea so far why this is happening.

  • Fast exchange of data between unmanaged code and managed code

    - by vizcaynot
    Hello: Without using P/Invoke, from C++/CLI I have succeeded in integrating various methods of a third-party DLL library built in C. One of these methods retrieves information from a database and stores it in different structures. The C++/CLI program I wrote reads those structures and stores them in a List<>, which is then returned for reading and use by an application programmed completely in C#. I understand that the double handling of the data (first filling in several structures, then filling all of these structures into a List<>) may generate unnecessary overhead, at which point I wish C++/CLI had the keyword "yield". Given the above scenario, do you have recommendations to avoid or reduce this overhead? Thanks.

  • ASP.NET MVC 2 controller-URL problems

    - by cc0
    I am still very new to the MVC framework, but I managed to create a controller that reads from a database and writes JSON to a URL: host.com/Controllername?minValue=something&maxValue=something. However, when I move the site to a subfolder (host.com/mvc/), it doesn't seem to be able to call the controller from there when I do it like this: host.com/mvc/Controllername?minValue=something&maxValue=something. Did I forget to do something somewhere to make this URL call valid from that subfolder? Any help here would be greatly appreciated.
