Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

Page 440 of 3203

  • How to convert to a string and read data from a TCP packet

    - by salime
    I used SharpPcap to capture TCP packets. Now I want to reconstruct the HTTP message from the TCP packets, but I don't know how. I read somewhere that I can find the start of the HTTP message in the TCP payload... I tried to convert the byte[] TCP data to a string using this code: string s = System.Text.Encoding.UTF8.GetString(tcp_pack.Data); but the string isn't readable, like a binary file opened in Notepad. Is that because the data is encrypted, or is the code incorrect? How can I reconstruct an HTTP message from TCP packets?
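
    A minimal sketch of how the start of an HTTP message can be recognized (the helper name is my own; tcp_pack.Data is assumed to be the raw segment payload): HTTP is plain ASCII, so the first bytes of a request or response are readable text. If no segment ever looks like text, the capture is probably TLS/HTTPS traffic rather than a decoding bug. Segments also have to be reassembled in sequence-number order before the message can be parsed.

        // Hypothetical helper; assumes `using System;` and `using System.Text;`.
        static bool LooksLikeHttpStart(byte[] payload)
        {
            // An HTTP message begins with ASCII such as "GET ", "POST " or "HTTP/1.1".
            int len = Math.Min(16, payload.Length);
            string head = Encoding.ASCII.GetString(payload, 0, len);
            return head.StartsWith("HTTP/") || head.StartsWith("GET ") || head.StartsWith("POST ");
        }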

  • Codeigniter: Check where model is loaded from?

    - by qwerty
    The strangest thing is happening right now. I "disabled" a model because I never need to access it again: I echo an error string in __construct() and call die() right after, so I would notice if the model were ever loaded, even by mistake. The problem is, the model is not loaded ANYWHERE in the entire project, but it still gets loaded for some reason. It's not autoloaded, that's for sure. Is it possible to track down where a model is loaded, i.e. which file and line triggers it?
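
    One hedged way to find the loader, since the constructor already runs: print a backtrace from it. debug_print_backtrace() lists the file and line of every caller in the chain, including the $this->load->model(...) call that triggered the constructor. The model name below is illustrative, and older CodeIgniter versions extend Model rather than CI_Model:

        class Disabled_model extends CI_Model {
            public function __construct()
            {
                parent::__construct();
                debug_print_backtrace();           // shows exactly who loaded us, file and line
                die('Disabled_model was loaded');
            }
        }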

  • Any reason not to use USE_ETAGS with CommonMiddleware in Django?

    - by allyourcode
    The only reason I can think of is that calculating ETags might be expensive. If pages change very quickly, the browser's cache is likely to be invalidated by the ETag anyway; in that case, calculating the ETag is a waste of time. On the other hand, giving a 304 response when possible minimizes the time spent in transmission. What are some good guidelines for when ETags are likely to be a net win when implemented with Django's CommonMiddleware?
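
    For reference, a minimal sketch of the configuration being discussed (USE_ETAGS and MIDDLEWARE_CLASSES are legacy settings from Django of this era; the setting was removed in later releases):

        # settings.py (legacy Django): with USE_ETAGS on, CommonMiddleware hashes
        # each response body to build the ETag, so every request still pays the
        # cost of generating the full page plus the hash, even when a 304 goes out.
        USE_ETAGS = True

        MIDDLEWARE_CLASSES = (
            'django.middleware.common.CommonMiddleware',
            # ...
        )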

  • How can I test my DB speed? (Learning)

    - by acidzombie24
    I have designed a database. There are no indexed columns and no optimization code. I am positive I should index certain columns, since I search them a lot. My question is: HOW do I test whether any part of my database will be slow? At the moment I am using SQLite, and I will be switching to either MS SQL or MySQL depending on my host provider. Will creating 100,000 records in each table be enough? Or will that always be fast in SQLite, so I need 1 million? Do I need 10 million before a database becomes slow? Also, how do I time it? I am using C#, so should I use Stopwatch, or is there an ADO.NET/SQLite function I should use?
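
    On the timing part, a minimal sketch with System.Diagnostics.Stopwatch (table, column, and connection names are placeholders): run the query enough times to average out noise, and read every row so the full cost is measured.

        var sw = System.Diagnostics.Stopwatch.StartNew();
        using (var cmd = new SQLiteCommand("SELECT * FROM Records WHERE Name = @n", conn))
        {
            cmd.Parameters.AddWithValue("@n", "test");
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { }   // drain the results
            }
        }
        sw.Stop();
        Console.WriteLine("Query took {0} ms", sw.ElapsedMilliseconds);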

  • Finding out memory allocation hotspots in Java

    - by Zamir
    Our GC is working hard, and we have some pauses that we want to decrease. We have some memory allocation issues that we want to solve before, or while, we tweak the actual JVM GC arguments. I would like to know which objects are making the GC sweat: Is there a way to know which objects are evacuated every time the GC runs? Is there a way to know which objects are moved between areas every time the GC runs? Is there a way to know which objects are in the Eden area? I am working extensively with JProfiler and Memory Analyzer, and I would like to get this information from a running application in my staging environment.
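
    Not from the original post, but a minimal starting point on a HotSpot JVM of that era: the collector itself can log, at every collection, how much each space holds and how old the surviving objects are, which shows what is being copied out of Eden and moved between survivor spaces.

        # Hedged sketch: standard HotSpot logging flags (pre-unified-logging JVMs)
        -verbose:gc -XX:+PrintGCDetails -XX:+PrintTenuringDistribution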

  • What data is sent to Google Analytics?

    - by Darryl Hein
    Has anyone found any documentation or research about what data is transferred to Google Analytics when it's added to a site? The main thing I'm wondering about is POST data, but the details of exactly what is sent would be useful. I'm considering implementing it on sites that have a lot of private data on them, and I'm wondering what data Google will capture, if any. (The sites are login only.) I need proof that I can provide to the users.

  • C# SQL Data Adapter Fill on existing typed Dataset

    - by René
    I have an option to choose between local data storage (an XML file) and SQL Server. A long time ago I created a typed dataset for my application to save data locally in the XML file. Now I have a bool that switches between the server-based version and the local version; if true, the application gets its data from SQL Server. I'm not sure, but it seems the SqlDataAdapter's Fill method can't fill the data into my existing schema:

        SqlCommand cmd = new SqlCommand("SELECT * FROM dbo.Categories WHERE CatUserId = 1", _connection);
        cmd.CommandType = CommandType.Text;
        _sqlAdapter = new SqlDataAdapter(cmd);
        _sqlAdapter.TableMappings.Add("Categories", "dbo.Categories");
        _sqlAdapter.Fill(Program.Dataset);

    This should fill the data from dbo.Categories into Categories (in my local typed dataset), but it doesn't. It creates a new table with the name "Table". It looks like it can't handle the existing schema, and I can't figure it out. Where is the problem? (Of course the database query isn't very useful this way; it's just a simplified version for testing.)
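
    A hedged fix, based on how ADO.NET names unmapped result sets: Fill stores the incoming result under the default source-table name "Table" (then "Table1", and so on), so the mapping has to go from that default name to the typed table, not from the dataset table to the database object:

        _sqlAdapter.TableMappings.Add("Table", "Categories");  // map Fill's default name
        _sqlAdapter.Fill(Program.Dataset);                     // now lands in the typed Categories table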

  • C++, using one byte to store two variables

    - by 2di
    Hi all, I am working on a representation of the chess board, and I am planning to store it in a 32-byte array, where each byte stores two pieces (that way only 4 bits are needed per piece). Doing it that way results in some overhead when accessing a particular index of the board. Do you think this code can be optimised, or that a completely different method of accessing indexes could be used?

        char getPosition(unsigned char* c, int index) {
            // move the pointer to the byte holding this index
            c += (index >> 1);
            if (index & 1) {
                // odd index: take the right (low) nibble
                return *c & 0xF;
            } else {
                // even index: take the left (high) nibble
                return *c >> 4;
            }
        }

        void setValue(unsigned char* board, char value, int index) {
            board += (index >> 1);
            if (index & 1) {
                // odd index: replace the right nibble, keep the left one
                *board = (*board & 0xF0) + value;
            } else {
                // even index: replace the left nibble
                *board = (*board & 0xF) + (value << 4);
            }
        }

        int main() {
            char* c = (char*)malloc(32);
            for (int i = 0; i < 64; i++) {
                setValue((unsigned char*)c, i % 8, i);
            }
            for (int i = 0; i < 64; i++) {
                cout << (int)getPosition((unsigned char*)c, i) << " ";
                if (((i + 1) % 8 == 0) && (i > 0)) {
                    cout << endl;
                }
            }
            return 0;
        }

    I am equally interested in your opinions on chess representations and in optimisation of the method above as a standalone problem. Thanks a lot.
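
    On the standalone question, a minimal branchless sketch of the same accessors (same layout: even index in the high nibble, odd index in the low nibble), which removes the if from the hot path:

        inline unsigned getNibble(const unsigned char* b, int i) {
            unsigned shift = (~i & 1) << 2;   // 4 for even i, 0 for odd i
            return (b[i >> 1] >> shift) & 0xFu;
        }

        inline void setNibble(unsigned char* b, unsigned v, int i) {
            unsigned shift = (~i & 1) << 2;
            b[i >> 1] = (unsigned char)((b[i >> 1] & ~(0xFu << shift)) | ((v & 0xFu) << shift));
        }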

  • Measuring the CPU frequency scaling effect

    - by Bryan Fok
    Recently I have been trying to measure the effect of CPU frequency scaling. Is it accurate to use this clock to measure it?

        template<std::intmax_t clock_freq>
        struct rdtsc_clock {
            typedef unsigned long long rep;
            typedef std::ratio<1, clock_freq> period;
            typedef std::chrono::duration<rep, period> duration;
            typedef std::chrono::time_point<rdtsc_clock> time_point;
            static const bool is_steady = true;

            static time_point now() noexcept {
                unsigned lo, hi;
                asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
                return time_point(duration(static_cast<rep>(hi) << 32 | lo));
            }
        };

    Update: according to a comment on another of my posts, I believe rdtsc cannot be used to measure the effect of CPU frequency scaling, because on modern CPUs the time-stamp counter ticks at a constant (invariant) rate that is not affected by the current frequency. Am I right?
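
    If the goal really is to observe frequency scaling, a hedged alternative on Linux is to read the kernel's cpufreq view directly rather than inferring it from a cycle counter (the sysfs path is standard where cpufreq support is enabled):

        #include <fstream>
        #include <iostream>

        int main() {
            std::ifstream f("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
            long khz = 0;
            if (f >> khz)
                std::cout << "cpu0 current frequency: " << khz << " kHz\n";
            else
                std::cout << "cpufreq not available on this machine\n";
        }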

  • MySQL left outer join is slow

    - by Ryan Doherty
    Hoping to get some help with this query; I've worked at it for a while now and can't get it any faster:

        SELECT date, COUNT(id) AS 'visits'
        FROM dates
        LEFT OUTER JOIN visits
            ON (dates.date = DATE(visits.start) AND account_id = 40)
        WHERE date >= '2010-12-13'
          AND date <= '2011-1-13'
        GROUP BY date
        ORDER BY date ASC

    That query takes about 8 seconds to run. I've added indexes on dates.date, visits.start, visits.account_id, and a composite on visits.start + visits.account_id, and can't get it to run any faster. Table structure (only the relevant columns of the visits table shown):

        CREATE TABLE visits (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `account_id` int(11) NOT NULL,
            `start` DATETIME NOT NULL,
            `end` DATETIME NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

        CREATE TABLE `dates` (
            `date` date NOT NULL,
            PRIMARY KEY (`date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    The dates table contains all days from 2010-1-1 to 2020-1-1 (~3k rows). The visits table contains about 400k rows dating from 2010-6-1 to yesterday. I'm using the dates table so the join returns 0 visits for days with no visits. The results I want, for reference:

        +------------+--------+
        | date       | visits |
        +------------+--------+
        | 2010-12-13 |    301 |
        | 2010-12-14 |    356 |
        | 2010-12-15 |    423 |
        | 2010-12-16 |    332 |
        | 2010-12-17 |    346 |
        | 2010-12-18 |    226 |
        | 2010-12-19 |    213 |
        | 2010-12-20 |    311 |
        | 2010-12-21 |    273 |
        | 2010-12-22 |    286 |
        | 2010-12-23 |    241 |
        | 2010-12-24 |    149 |
        | 2010-12-25 |    102 |
        | 2010-12-26 |    174 |
        | 2010-12-27 |    258 |
        | 2010-12-28 |    348 |
        | 2010-12-29 |    392 |
        | 2010-12-30 |    395 |
        | 2010-12-31 |    278 |
        | 2011-01-01 |    241 |
        | 2011-01-02 |    295 |
        | 2011-01-03 |    369 |
        | 2011-01-04 |    438 |
        | 2011-01-05 |    393 |
        | 2011-01-06 |    368 |
        | 2011-01-07 |    435 |
        | 2011-01-08 |    313 |
        | 2011-01-09 |    250 |
        | 2011-01-10 |    345 |
        | 2011-01-11 |    387 |
        | 2011-01-12 |      0 |
        | 2011-01-13 |      0 |
        +------------+--------+

    Thanks in advance for any help!
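
    Not from the original post, but a sketch of the usual first fix: a composite index that matches both join conditions at once, since separate single-column indexes can each serve only one of them. Note also that DATE(visits.start) is computed per row, so no index on start alone can satisfy that expression directly:

        -- With (account_id, start) in place, rewriting the join as a range
        -- (visits.start >= dates.date AND visits.start < dates.date + INTERVAL 1 DAY)
        -- lets MySQL use the index instead of evaluating DATE() for every row.
        ALTER TABLE visits ADD INDEX idx_account_start (account_id, start);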

  • How to speed up this kind of for-loop?

    - by wok
    I would like to compute the maximum of translated images along the direction of a given axis. I know about ordfilt2, however I would like to avoid using the Image Processing Toolbox. So here is the code I have so far:

        imInput = imread('tire.tif');
        n = 10;
        imMax = imInput(:, n:end);
        for i = 1:(n-1)
            imMax = max(imMax, imInput(:, i:end-(n-i)));
        end

    Is it possible to avoid using a for-loop in order to speed the computation up, and, if so, how?
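
    One loop-free sketch using only base MATLAB (no toolbox): gather every width-n window of columns with a single index matrix, then take the max across the window dimension. Column j of the result is max(imInput(:, j:j+n-1), [], 2), which is exactly what the loop computes.

        cols  = size(imInput, 2) - n + 1;
        idx   = bsxfun(@plus, (0:n-1)', 1:cols);  % n-by-cols matrix of column indices
        imMax = squeeze(max(reshape(imInput(:, idx), size(imInput, 1), n, cols), [], 2));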

  • Dynamic Custom Fields for Data Model

    - by Jerry Deng
    I am in the process of creating a dynamic database where the user will be able to create resource types to which they can add custom fields (multiple texts, strings, and files). Each resource type will be able to display, import, and export its data. I've been thinking about it, and here are my approaches; I would love to hear what you think. Ideas:
    1. serialize all the custom data into a single hash field (pro: writing is easier; con: reading back out may be harder);
    2. child fields (the model has separate child records for string fields, text fields, and file paths);
    3. a fixed number of custom fields in the same table, with a key-mapping hash stored in the same row;
    4. a non-SQL approach, but then the problem is generating/changing models on the fly to work with different custom fields.
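
    A minimal sketch of idea 2 in SQL terms (an entity-attribute-value style child table; all names are illustrative):

        CREATE TABLE custom_fields (
            id          INT PRIMARY KEY AUTO_INCREMENT,
            resource_id INT NOT NULL,                     -- owning resource
            name        VARCHAR(64) NOT NULL,             -- user-chosen field label
            kind        ENUM('string','text','file') NOT NULL,
            value       TEXT,                             -- content, or a file path
            KEY idx_resource_name (resource_id, name)
        );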

  • Perl launched from Java takes forever

    - by Wade Williams
    I know this is an absolute shot in the dark, but we're absolutely perplexed. A Perl (5.8.6) script run by Java (1.5) takes more than an hour to complete. The same script, when run manually from the command line, takes 12 minutes. This is on a Linux host. Logging is the same in both cases, and the script is run with the same parameters in both cases. The script does some complex things (Oracle DB access, some scp transfers, etc.), but again, it performs the exact same actions in both cases. We're stumped. Has anyone run into a similar situation? If not, and you were faced with the same situation, how would you go about debugging it?
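
    Not from the original thread, but one classic cause worth ruling out first: if the parent process never drains the child's stdout/stderr, the pipe buffer fills and the script blocks on every print. A hedged sketch (the script path is a placeholder; this belongs in a method that declares the checked exceptions, and stderr needs the same treatment or a redirect):

        Process p = Runtime.getRuntime().exec(new String[] { "/usr/bin/perl", "script.pl" });
        java.io.BufferedReader out = new java.io.BufferedReader(
                new java.io.InputStreamReader(p.getInputStream()));
        for (String line; (line = out.readLine()) != null; ) {
            System.out.println(line);   // or simply discard
        }
        p.waitFor();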

  • How to effectively measure a difference in run-time

    - by Knowing me knowing you
    In one of the exercises in TC++PL, B.S. asks you to: "Write a function that either returns a value or that throws that value based on an argument. Measure the difference in run-time between the two ways." It's a great pity he never explains how to measure such things. I'm not sure whether I'm supposed to write a simple "time start, time end" counter, or whether there are more effective and practical ways to do it? Thanks in advance.
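
    A minimal sketch, assuming C++11 <chrono> is available (function names are mine): call each version in a loop large enough that the per-call cost rises above the timer's resolution. A serious harness would also defeat the optimizer (e.g. with volatile sinks), but this shows the shape of the measurement.

        #include <chrono>
        #include <iostream>

        int ret(int v) { return v; }
        int thr(int v) { throw v; }   // never returns normally

        int main() {
            const int N = 1000000;
            long long sum = 0;        // consumed below so the loops aren't elided

            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < N; ++i) sum += ret(i);
            auto t1 = std::chrono::steady_clock::now();
            for (int i = 0; i < N; ++i) {
                try { thr(i); } catch (int v) { sum += v; }
            }
            auto t2 = std::chrono::steady_clock::now();

            using std::chrono::duration_cast;
            using std::chrono::milliseconds;
            std::cout << "return: " << duration_cast<milliseconds>(t1 - t0).count() << " ms, "
                      << "throw: "  << duration_cast<milliseconds>(t2 - t1).count() << " ms "
                      << "(sum=" << sum << ")\n";
        }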

  • SQL Profiler and Tuning Advisor for Reporting Services - what events should be selected?

    - by chris
    I've used SQL Profiler to generate a trace file, and the Tuning Advisor to take that trace file and provide recommendations for database updates. However, when running against a Reporting Services server, the profiler doesn't seem to capture any of the queries. I'm logging the defaults (SQL:BatchCompleted and SQL:BatchStarting, RPC:Completed, and Sessions - Existing Connections). What events should I be capturing in SQL Profiler in order to run the Tuning Advisor?

  • Codeigniter: Pass data to library function

    - by Kevin Brown
    I need my function to do one of two things based on the method variable, but I don't know how to get it done... My controller:

        function survey($method)
        {
            $id = $this->session->userdata('id');
            $data['member'] = $this->home_model->getUser($id);
            $data['header'] = "Home";
            $this->survey_form_processing->survey_form($this->_container, $data);
        }

    Library function:

        function survey_form($container, $method)
        {
            if ($method == 1) {
                $this->CI->load->view($container, $data);
            }
            if ($method == 2) {
                // Do stuff...
            }
        }
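
    Not from the original post, but one hedged way to wire it up: pass the view data and the method flag as separate arguments. As written, the controller passes $data into the $method slot, and $data is undefined inside the library:

        // Controller:
        $this->survey_form_processing->survey_form($this->_container, $data, $method);

        // Library:
        function survey_form($container, $data, $method)
        {
            if ($method == 1) {
                $this->CI->load->view($container, $data);
            } elseif ($method == 2) {
                // do stuff...
            }
        }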

  • Will creating an index help in this case?

    - by The King
    I'm still a learning user of SQL Server 2005. Here is my table structure:

        CREATE TABLE [dbo].[Trn_PostingGroups](
            [ControlGroup] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
            [PracticeCode] [char](5) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
            [ScanDate] [smalldatetime] NULL,
            [DepositDate] [smalldatetime] NULL,
            [NameOfFile] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            [DepositValue] [decimal](11, 2) NULL,
            [RecordStatus] [char](1) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
            CONSTRAINT [PK_Trn_PostingGroups_1] PRIMARY KEY CLUSTERED
            (
                [ControlGroup] ASC,
                [PracticeCode] ASC
            ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
        ) ON [PRIMARY]

    Scenario 1: suppose I have a query like this...

        SELECT * FROM Trn_PostingGroups WHERE PracticeCode = 'ABC'

    Will indexing PracticeCode separately help make my query faster?

    Scenario 2:

        SELECT * FROM Trn_PostingGroups
        WHERE ControlGroup = 12701 AND PracticeCode = 'ABC' AND NameOfFile = 'FileName1'

    Will indexing NameOfFile separately help make my query faster?
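
    A hedged sketch of the index Scenario 1 usually calls for: the clustered primary key is ordered by ControlGroup first, so a filter on PracticeCode alone cannot seek into it, while a dedicated nonclustered index can. (In Scenario 2 the full primary key already identifies the row, so NameOfFile is just a residual filter and an index on it adds little.)

        CREATE NONCLUSTERED INDEX IX_Trn_PostingGroups_PracticeCode
            ON dbo.Trn_PostingGroups (PracticeCode);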

  • Pulling $_GET data and creating multidimensional array using loop

    - by Chris J
    I'm creating a checkout for customers, and the data about what's in their cart is being sent to a page (for just now) via $_GET. I want to extract that data and then populate a multidimensional array with it using a loop. Here's how I'm naming the data:

        $itemCount = $_GET['itemCount'];
        $i = 1;
        while ($i <= $itemCount) {
            ${'item_name_' . $i}     = $_GET["item_name_{$i}"];
            ${'item_quantity_' . $i} = $_GET["item_quantity_{$i}"];
            ${'item_price_' . $i}    = $_GET["item_price_{$i}"];
            //echo "<br />Name: " . ${'item_name_'.$i} . " - Quantity: " . ${'item_quantity_'.$i} . " - Price: " . ${'item_price_'.$i};
            $i++;
        }

    From here I'd like to create a multidimensional array like this:

        Array
        (
            [Item_1] => Array
                (
                    [item_name] => Shoe
                    [item_quantity] => 2
                    [item_price] => 40.00
                )
            [Item_2] => Array
                (
                    [item_name] => Bag
                    [item_quantity] => 1
                    [item_price] => 60.00
                )
            [Item_3] => Array
                (
                    [item_name] => Parrot
                    [item_quantity] => 4
                    [item_price] => 90.00
                )
            ...
        )

    What I'd like to know is whether there is a way to create this array in the existing while loop? I'm aware of being able to add data to an array after declaring an empty one with $data = [], but the actual syntax eludes me. Maybe I'm completely off the right track and there is a better way of doing it? Thanks
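
    One hedged way to do it inside the same loop (the array name is illustrative), building each nested entry directly instead of creating variable variables:

        $items = array();
        $itemCount = (int) $_GET['itemCount'];
        for ($i = 1; $i <= $itemCount; $i++) {
            $items["Item_$i"] = array(
                'item_name'     => $_GET["item_name_{$i}"],
                'item_quantity' => $_GET["item_quantity_{$i}"],
                'item_price'    => $_GET["item_price_{$i}"],
            );
        }
        // print_r($items) now matches the structure shown above.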

  • How to interpret Google's "Avg. Page Load Time"?

    - by hawbsl
    Is there any industry rule of thumb for what's considered an unacceptable load time vs. an OK one vs. a blistering fast one? We're just reviewing some Google Analytics data, and an Avg. Page Load Time of 0.74 seconds is being reported. I guess that's OK. However, it would be good if some meatier comparison data were available: a blog post, or somewhere with analysis of what speeds are generally being achieved by various kinds of sites. Any useful links to help someone interpret these speeds? If you Google it, you just get a lot of results about how to improve your speed, and we're not at that stage yet.

  • C++ code which is slower than its C equivalent?

    - by user997112
    Are there any aspects of the C++ programming language where the code is known to be slower than the equivalent C? Obviously this excludes the OO features such as virtual functions and vtables. I am wondering whether, when you are programming in a latency-critical area (and you aren't worried about OO features), you could stick with basic C++, or whether C would be better?

  • MySQL data reader returning 0 rows from class

    - by Neo
    I am implementing a database manager class within my app, mainly because there are three databases to connect to, one of them local. However, the return value isn't working: I know the query brings back rows, but when the reader is returned by the class it has none. What am I missing?

        public MySqlDataReader localfetchrows(string query, List<MySqlParameter> dbparams = null)
        {
            using (var conn = connectLocal())
            {
                Console.WriteLine("Connecting local : " + conn.ServerVersion);
                MySqlCommand sql = conn.CreateCommand();
                sql.CommandText = query;
                if (dbparams != null && dbparams.Count > 0)
                {
                    sql.Parameters.AddRange(dbparams.ToArray());
                }
                MySqlDataReader reader = sql.ExecuteReader();
                Console.WriteLine("Reading data : " + reader.HasRows + reader.FieldCount);
                return reader;
            }
        }

    And the code to get the results:

        query = @"SELECT jobtypeid, title FROM jobtypes WHERE active = 'Y' ORDER BY title ASC";
        //parentfrm.jobtypes = db.localfetchrows(query);
        var rows = db.localfetchrows(query);
        Console.WriteLine("Reading data : " + rows.HasRows + rows.FieldCount);
        while (rows.Read()) { }

    These scripts return the following:

        Connecting local : 5.5.16
        Reading data : True2
        Reading data : False0
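
    A hedged diagnosis with a sketch of one fix: the using block disposes the connection as localfetchrows returns, and a MySqlDataReader can only be read while its connection is open, which matches the True2 inside the method and the False0 outside. Returning buffered rows avoids holding the connection open:

        // Sketch: return a filled DataTable instead of a live reader.
        // Assumes `using System.Data;` and `using MySql.Data.MySqlClient;`.
        public DataTable localfetchrows(string query, List<MySqlParameter> dbparams = null)
        {
            using (var conn = connectLocal())
            using (var cmd = new MySqlCommand(query, conn))
            {
                if (dbparams != null && dbparams.Count > 0)
                    cmd.Parameters.AddRange(dbparams.ToArray());

                var table = new DataTable();
                new MySqlDataAdapter(cmd).Fill(table);  // Fill manages open/close itself
                return table;                           // safe to use after conn is disposed
            }
        }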

  • Better understanding of my SQL transactions

    - by Slew Poke
    I just realized that my application was needlessly making 50+ database calls per user request due to some hidden coding -- hidden in the sense that, between LINQ, persistence frameworks, and events, a huge number of calls were being made without my being aware of it. Is there a recommended way to analyze the individual transactions going to my SQL Server 2008 database, preferably with some integration into my Visual Studio 2010 environment? I want to be able to 'spy' on individual transactions being made, but only for certain pieces of my code, and without making serious changes to either the code or the database.
