Search Results

Search found 1968 results on 79 pages for 'pickle dump'.

Page 65/79 | < Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • SVN Serve, Missing a Directory

    - by Ryan Smith
    I'm sure this is an asinine question, and I blame myself for not fully understanding how the SVNSERVE process works. I have an SVN repo that needs to be moved to a server within a client's cloud. I did this a while back and ran into the issue of the SVNSERVE.exe process not being set to the right directory. I have the SVNSERVE.exe process running as a Windows service and pointing to the right directory. There are two other repos there that are served out fine from the same directory. I copied out the new directory just like I did with the others, but I'm getting the error "No repository found". I thought that SVNSERVE just looked at that directory and served out the repositories that were there, but I have had a hard time finding more information about that. I thought it was a Windows permission problem, but I set the whole folder to give full control to EVERYONE, so that's not it. I feel horrible that I didn't fully understand this problem the first time I fought it, but it's late on a Sunday night and clients are yelling. Anyone know what I'm missing? Thanks.

    EDIT: It's specific to the repository. I tested the same process with some of the other repos we have on our server, and when I copied them up they worked just as expected. This bug is breaking me, and I wish I could provide more details, but that's all I know. I'm going to try an SVN dump instead of an XCopy and see how that goes. I'll let you know.
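    A dump/load cycle is usually safer than a raw file copy, because it rebuilds the repository in the target server's own on-disk format (a format mismatch between Subversion versions is one common cause of "No repository found" after an XCopy). A minimal sketch, assuming the repository root paths shown here stand in for the real locations:

        svnadmin dump /repos/source-repo > source-repo.dump
        svnadmin create /repos/target-repo
        svnadmin load /repos/target-repo < source-repo.dump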

    Read the article

  • Save memory in Python: how to iterate over the lines and save them efficiently with a 2 million line file

    - by skyl
    I have a tab-separated data file with a little over 2 million lines and 19 columns. You can find it, in US.zip: http://download.geonames.org/export/dump/. I started out running the following, but with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient, so I'm posting that below. Still, even with this small optimization, I'm using 10% of my memory on the process and have only done about 3% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop?

        def run():
            from geonames.models import POI
            f = file('data/US.txt')
            for l in f:
                li = l.split('\t')
                try:
                    p = POI()
                    p.geonameid = li[0]
                    p.name = li[1]
                    p.asciiname = li[2]
                    p.alternatenames = li[3]
                    p.point = "POINT(%s %s)" % (li[5], li[4])
                    p.feature_class = li[6]
                    p.feature_code = li[7]
                    p.country_code = li[8]
                    p.ccs2 = li[9]
                    p.admin1_code = li[10]
                    p.admin2_code = li[11]
                    p.admin3_code = li[12]
                    p.admin4_code = li[13]
                    p.population = li[14]
                    p.elevation = li[15]
                    p.gtopo30 = li[16]
                    p.timezone = li[17]
                    p.modification_date = li[18]
                    p.save()
                except IndexError:
                    pass

        if __name__ == "__main__":
            run()
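    If this runs under Django with DEBUG = True, the likely leak is not the file iteration at all but Django's query log: every save() appends its SQL to django.db.connection.queries, which grows without bound in a long import loop. A hedged sketch of the loop with the log cleared periodically (reset_queries is a real django.db function; save_poi is a hypothetical helper standing in for the per-row work above, and the batch size is arbitrary):

        from django.db import reset_queries

        def run():
            from geonames.models import POI
            with open('data/US.txt') as f:
                for i, line in enumerate(f):
                    li = line.rstrip('\n').split('\t')
                    save_poi(li)          # hypothetical: the per-row POI work from the original
                    if i % 10000 == 0:
                        reset_queries()   # clear the DEBUG-mode query log so memory stays flat

    Setting DEBUG = False for the import run achieves the same thing without the periodic call.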

    Read the article

  • Ruby 1.9 GarbageCollector, GC.disable/enable

    - by seb
    I'm developing a Rails 2.3, Ruby 1.9.1 web application that does quite a lot of calculation before each request. For every request it has to calculate a graph with 300 nodes and ~1000 edges. The graph and all its nodes, edges, and other objects are initialized for every request (~2000 objects); actually, they are cloned from an uncalculated cached graph using Marshal.load(Marshal.dump()). Performance is quite an issue here. Right now the whole request takes 150 ms on average. I then saw that during a request, parts of the calculation randomly take longer. Assuming that this might be the garbage collector kicking in, I wrapped the request in GC.disable and GC.enable, so that the request defers garbage collection until calculating and rendering have finished.

        def query
          GC.disable
          calculate
          respond_to do |format|
            format.html { render }
          end
          GC.enable
        end

    The average request now takes about 100 ms (50 ms less). But I'm unsure if this is a good/stable solution; I assume there must be drawbacks to doing that. Does anybody have experience with a similar problem, or see problems with the above code?
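    One concrete hazard in the code above: if calculate or render raises, GC.enable is never reached and the collector stays off for every later request served by that process. A minimal sketch of the usual guard, using ensure:

        def query
          GC.disable
          calculate
          respond_to do |format|
            format.html { render }
          end
        ensure
          GC.enable   # runs even if calculate or render raises
        end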

    Read the article

  • Access Violation in std::pair

    - by sameer karjatkar
    I have an application which is trying to populate a pair. Out of nowhere the application crashes. The WinDbg analysis of the crash dump suggests:

        PRIMARY_PROBLEM_CLASS: INVALID_POINTER_READ
        DEFAULT_BUCKET_ID: INVALID_POINTER_READ
        STACK_TEXT:
        0389f1dc EPFilter32!std::vector<std::pair<unsigned int,unsigned int>,std::allocator<std::pair<unsigned int,unsigned int> > >::size+0xc
        INVALID_POINTER_READ_c0000005_Test.DLL!std::vector_std::pair_unsigned_int, unsigned_int_,std::allocator_std::pair_unsigned_int,unsigned_int_____::size

    Here is the code where it fails:

        for (unsigned i1 = 0; i1 < size1; ++i1) {
            for (unsigned i2 = 0; i2 < size2; ++i2) {
                const branch_info& b1 = en1.m_branches[i1]; // Exception here: crash
                const branch_info& b2 = en2.m_branches[i2];
            }
        }

    where branch_info is std::pair<unsigned int,unsigned int> and en1.m_branches[i1] fetches me a pair value.
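    The stack trace shows vector::size being called on an invalid this pointer, which usually means en1 itself (or whatever owns m_branches) is dangling, or that size1/size2 were captured from different containers than the ones being indexed. A hedged defensive sketch, assuming m_branches is a std::vector<branch_info>, that at least turns silent corruption into a thrown exception:

        #include <cstddef>
        #include <utility>
        #include <vector>

        typedef std::pair<unsigned int, unsigned int> branch_info;

        void walk(const std::vector<branch_info>& b1v, const std::vector<branch_info>& b2v)
        {
            // Take the sizes from the same containers we index, immediately before the loop.
            const std::size_t size1 = b1v.size();
            const std::size_t size2 = b2v.size();
            for (std::size_t i1 = 0; i1 < size1; ++i1) {
                for (std::size_t i2 = 0; i2 < size2; ++i2) {
                    const branch_info& b1 = b1v.at(i1); // at() throws instead of reading wild memory
                    const branch_info& b2 = b2v.at(i2);
                    (void)b1; (void)b2;
                }
            }
        }

    If at() starts throwing where operator[] crashed, the indices are stale; if it still crashes the same way, the vector object itself has already been destroyed or overwritten.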

    Read the article

  • How to capture a live stream from Windows Media Server 2008 using C#/.NET

    - by Hummad Hassan
    I want to capture the live stream from Windows Media Server to the filesystem on my PC. I have tried this against my own media server with the following code, but when I checked the output file, this is all I found in it. Please help me with this. Thanks.

        [Reference]
        Ref1=http://mywindowsmediaserver/test?MSWMExt=.asf
        Ref2=http://mywindowsmediaserver/test?MSWMExt=.asf

        FileStream fs = null;
        try
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test");
            CookieContainer ci = new CookieContainer(1000);
            req.Timeout = 60000;
            req.Method = "Get";
            req.KeepAlive = true;
            req.MaximumAutomaticRedirections = 99;
            req.UseDefaultCredentials = true;
            req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3";
            req.ReadWriteTimeout = 90000000;
            req.CookieContainer = ci;
            //req.MediaType = "video/x-ms-asf";
            req.AllowWriteStreamBuffering = true;

            HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
            Stream resps = resp.GetResponseStream();
            fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.ReadWrite);
            byte[] buffer = new byte[1024];
            int bytesRead = 0;
            while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0)
            {
                fs.Write(buffer, 0, bytesRead);
            }
        }
        catch (Exception ex)
        {
        }
        finally
        {
            if (fs != null)
                fs.Close();
        }
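    What came back is not the stream itself but an ASX-style redirector file: the server answers a plain HTTP GET with a [Reference] playlist pointing at the real source. One hedged approach is to parse the RefN entries out of that response and request one of those URLs with a second HttpWebRequest. A sketch of a hypothetical helper (not from the question; whether the second request yields raw ASF bytes still depends on how the publishing point is configured):

        using System;
        using System.Collections.Generic;

        // Hypothetical helper: pull Ref1=, Ref2=, ... URLs out of a [Reference] body.
        static List<string> ParseReferenceUrls(string body)
        {
            var urls = new List<string>();
            foreach (string line in body.Split('\n'))
            {
                string trimmed = line.Trim();
                if (trimmed.StartsWith("Ref", StringComparison.OrdinalIgnoreCase))
                {
                    int eq = trimmed.IndexOf('=');
                    if (eq > 0)
                        urls.Add(trimmed.Substring(eq + 1));
                }
            }
            return urls;
        }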

    Read the article

  • Using nohup mysqldump from a PHP script inserts a '!' and breaks to a new line

    - by Aglystas
    I'm trying to run a mysqldump from PHP using the nohup command to prevent the script from hanging. Here's the command (the database is mc6_erik_test; everything else is just a table list until you get to the end):

        exec("mysqldump -u root -pPassword -h vfmy1-dev.mountainmedia.com mc6_erik_test "
           . "access_log admin affiliate affiliate_2_product authorized_ip category "
           . "category_2_product claim_code claim_code_log country_exclude customer "
           . "customer_2_subscription customer_account_log customer_address customer_bill "
           . "customer_discount customer_ip customer_key email_bulk_log email_draft "
           . "email_queue email_queue_log email_template endicia_log gift_wrap "
           . "image_bulk_upload log mailing_list manufacturer merchant merchant_checkout "
           . "merchant_ip merchant_ship merchant_ship_conf new_account_temp order_dest "
           . "order_item order_item_2_dest order_item_2_package order_item_log "
           . "order_item_registrant order_note order_package order_package_label orders "
           . "package package_2_product pref product product_2_supplier product_also "
           . "product_event_date product_image product_option product_related "
           . "product_review product_review_helpful product_ship_disable report "
           . "search_log subscription supplier temp_product transaction_account "
           . "transactions wish_list wish_list_fill wish_list_item "
           . "--opt --where='merchant_id=\'6\'' > /tmp/sync_db_card_20100519105358.sql");

    As you can see, it's really long, because I have to specifically list only the tables I want to dump. The command works great from the command line. However, when I run it through a web script, towards the end the following is used as the command...

        supplier temp_product transaction_account transactio!
        ns wish_list wish_list_fill wish_list_item --opt --where='merchant_id="6"' > /tmp/sync_db_card_20100519105358.sql

    So the table name 'transactions' is being split by an exclamation point and a newline. The rest of the command is exactly the same. If I run this through the PHP CLI interface it doesn't happen; only when I run it via the web server using nohup. I'm wondering if there is some inherent string length limit in using the exec command within a PHP script, or really, if anyone has any general idea what is going on here.
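    One hedged workaround that sidesteps whatever is wrapping the long command line: write the command into a temporary shell script and exec that instead, so the table list never travels through the web server's shell invocation as one giant argument string. A sketch (paths and credentials are placeholders):

        <?php
        // Sketch: run a very long mysqldump command via a temp script instead of inline exec().
        $cmd = "mysqldump -u root -pPassword -h host dbname table1 table2 table3 "
             . "--opt --where='merchant_id=\"6\"' > /tmp/sync_db_card.sql";
        $script = tempnam(sys_get_temp_dir(), 'dump_');
        file_put_contents($script, "#!/bin/sh\n" . $cmd . "\n");
        chmod($script, 0700);
        exec('/bin/sh ' . escapeshellarg($script), $output, $status);
        unlink($script);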

    Read the article

  • Ruby -- looking for some sort of "Regexp unescape" method

    - by RubyNoobie
    I have a bunch of strings that appear to have been double-escaped -- e.g., I have "\\014\"\\000\"\\016smoothing\"\\011mean\"\\022color\"\\011zero@\\016" but I want "\014"\000"\016smoothing"\011mean"\022color"\011zero@\016". Is there a method I can use to unescape them? I imagine that I could write a regex to remove one backslash from every consecutive n backslashes, but I don't have a lot of regex experience, and it seems there ought to be a more elegant way to do it. For example, when I puts MyString it displays the output I'd like, but I don't know how I might capture that into a variable. Thanks!

    Edited to add context: I have this class that is being used to marshal/restore some stuff, but when I restore some old strings it spits out a type error, which I've determined is because they weren't -- for some inexplicable reason -- stored as Base64. They instead appear to be 'double-escaped', when I need them to be 'single-escaped' to get restored.

        require 'base64'

        class MarshaledStuff < ActiveRecord::Base
          validates_presence_of :marshaled_obj

          def contents
            obj = self.marshaled_obj
            return Marshal.restore(Base64.decode64(obj))
          end

          def contents=(newcontents)
            self.marshaled_obj = Base64.encode64(Marshal.dump(newcontents))
          end
        end
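    If the stored strings really are Ruby-literal escape sequences applied one level too many, one hedged way to undo a single level is to interpret the escapes back into bytes yourself. A sketch that handles the octal escapes and escaped quotes visible in the sample (it assumes only \NNN octal escapes and escaped punctuation occur, which matches the example shown but may not cover every stored record):

        def unescape_once(s)
          s.gsub(/\\(\d{1,3}|.)/) do
            seq = $1
            if seq =~ /\A\d+\z/
              seq.to_i(8).chr   # "\\014" -> "\014"
            else
              seq               # "\\\"" -> "\"", "\\\\" -> "\\"
            end
          end
        end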

    Read the article

  • Why is execution of code loaded from an external file not halted by the OS?

    - by menjaraz
    I've harnessed a project released on the internet a long time ago. Here come the details, all irrelevant things being stripped off for the sake of concision and clarity. A binary file, whose content is described below, is loaded into memory and executed.

    Hex dump:

        55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC
        3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B
        45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD
        C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02
        EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10
        48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1
        8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B
        45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C
        01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC
        03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D
        45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90

    The execution uses the following method snippet:

        var
          MySrcArray, MyDestArray: array [1 .. 15] of Byte;
          // ...
          MyBuffer: Pointer;
          TheProc: procedure;
          SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall;
        begin
          // Initialization of MySrcArray with random bytes and display here ...
          // Loading of the binary file into MyBuffer using merely GetMem here ...
          @SortIt := MyBuffer;
          try
            SortIt(@MySrcArray, @MyDestArray, 15);
            // Display of MyDestArray (the outcome of the processing!)
          except
            // Invalid code error handling
          end;
          // Cleaning code here ...
        end;

    It works like a charm on my box. My question: how come it works without using VirtualAlloc and/or VirtualProtect?
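    The likely answer is DEP policy: on 32-bit Windows, Data Execution Prevention typically defaults to "OptIn", covering only essential Windows programs and services, so a heap block from GetMem can still be executed on many machines. To be correct everywhere (including under "OptOut"/"AlwaysOn" policies or on x64), the buffer should be allocated executable explicitly. A hedged Delphi-style sketch using the real Win32 APIs (CodeSize is a placeholder for the file's byte count; the surrounding code is illustrative):

        // Allocate an executable buffer instead of relying on GetMem memory being executable.
        MyBuffer := VirtualAlloc(nil, CodeSize, MEM_COMMIT or MEM_RESERVE, PAGE_EXECUTE_READWRITE);
        try
          // ... copy the loaded bytes into MyBuffer, assign @SortIt, call as before ...
        finally
          VirtualFree(MyBuffer, 0, MEM_RELEASE);
        end;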

    Read the article

  • Is it OK to use WPF assemblies in a web app?

    - by Chris
    I have an ASP.NET MVC 2 app targeting .NET 4 that needs to be able to resize images on the fly and write them to the response. I have code that does this and it works. I am using System.Drawing.dll. However, I want to enhance my code so that not only am I resizing the image, but I am dropping it from 24bpp down to 4-bit grayscale. I could not, for the life of me, find code on how to do this with System.Drawing.dll. But I did find a bunch of WPF stuff. This is my working/sample code (runs in LINQPad):

        // Load the original 24 bit image
        var bitmapImage = new BitmapImage();
        bitmapImage.BeginInit();
        bitmapImage.UriSource = new Uri(@"C:\Temp\Resized\18_appa2_015.png", UriKind.Absolute);
        //bitmapImage.DecodePixelWidth = 600;
        bitmapImage.EndInit();

        // Create the destination image
        var formatConvertedBitmap = new FormatConvertedBitmap();
        formatConvertedBitmap.BeginInit();
        formatConvertedBitmap.Source = bitmapImage;
        formatConvertedBitmap.DestinationFormat = PixelFormats.Gray4;
        formatConvertedBitmap.EndInit();

        // Encode and dump the image to disk
        var encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(formatConvertedBitmap));
        using (var fileStream = File.Create(@"C:\Temp\Resized\18_appa2_015_s2.png"))
        {
            encoder.Save(fileStream);
        }

    It uses System.Xaml.dll, WindowsBase.dll, PresentationCore.dll, and PresentationFramework.dll. The namespaces used are System.Windows.Controls, System.Windows.Media, and System.Windows.Media.Imaging. Is there any problem using these namespaces in my web application? It doesn't seem right. If anyone knows how to drop the bit depth without all this WPF stuff (which I barely understand, BTW), I would be thrilled to see that too.

    Read the article

  • Storing arbitrary data in HTML

    - by Rob Colburn
    What is the best way to embed data in HTML elements for later use? As an example, let's say we have jQuery returning some JSON from the server, and we want to dump that data out to the user as paragraphs. However, we want to be able to attach metadata to these elements, so we can wire up events for them later. The way I tend to handle this is with some ugly prefixing:

        function handle_response(data) {
            var html = '';
            for (var i in data) {
                html += '<p id="prefix_' + data[i].id + '">' + data[i].message + '</p>';
            }
            jQuery('#log').html(html).find('p').click(function () {
                alert('The ID is: ' + $(this).attr('id').substr(7));
            });
        }

    Alternatively, one can build a form in the paragraph and store the metadata there. But that often feels like overkill. This has been asked before in different ways, but I do not feel it's been answered well: http://stackoverflow.com/questions/432174/how-to-store-arbitrary-data-for-some-html-tags http://stackoverflow.com/questions/209428/non-standard-attributes-on-html-tags-good-thing-bad-thing-your-thoughts
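    A hedged alternative that avoids both the ID prefix and hidden form fields: build each element, attach the record with jQuery's .data() API, and read it back inside the handler. A sketch against the same handle_response shape:

        function handle_response(data) {
            var log = jQuery('#log').empty();
            for (var i in data) {
                jQuery('<p/>')
                    .text(data[i].message)
                    .data('record', data[i])      // stash the whole record on the element
                    .click(function () {
                        alert('The ID is: ' + jQuery(this).data('record').id);
                    })
                    .appendTo(log);
            }
        }

    Using .text() instead of string-built HTML also sidesteps markup injection from the server data.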

    Read the article

  • PHP parsing speed optimization

    - by Arnaud
    I would like to add a tooltip or generate a link according to the elements available in the database. For example, if the HTML page printed is:

        to reboot your linux host in single-user mode you can ...

    I will use explode(" ", $row[page]), and the idea is now to look up every single word in the page to find out if it has a related reference. In this example, let's say I've got a table reference with one entry for reboot and one for linux:

        reboot: restart a computer
        linux: operating system

    Now my output will look like (replaced < and > by @):

        to @a href="ref/reboot"@reboot@/a@ your @a href="ref/linux"@linux@/a@ host in single-user mode you can ...

    Instead of having a static list generated when I saved the content, if I add more keywords in the future, the text will become more interactive. My main concern and question is: how can I create an efficient enough process to do it? Should I store all the DB entries in an array and compare them? Do an SQL query for each word (seems to be crazy)? Dump the table to a file and use a very long regex, or a "grep -f pattern data" way of doing it? Or, or, or... I'm sure there must be a better way of doing it, I just don't have a clue about it; or maybe this will be far too resource-unfriendly and I should avoid doing such things. Cheers!
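    One hedged middle ground: load the keyword table once per request (or cache it), then make a single pass over the text with preg_replace_callback, so each word costs one hash lookup instead of one query. A sketch (requires PHP 5.3+ for the closure; table and column names, and the $pdo connection, are placeholders):

        <?php
        // One query up front: build a lookup map of known keywords.
        $refs = array();
        foreach ($pdo->query('SELECT keyword FROM reference') as $row) {
            $refs[strtolower($row['keyword'])] = true;
        }

        // One pass over the page text: link only words present in the map.
        $html = preg_replace_callback('/\b\w+\b/', function ($m) use ($refs) {
            $word = strtolower($m[0]);
            return isset($refs[$word])
                ? '<a href="ref/' . $word . '">' . $m[0] . '</a>'
                : $m[0];
        }, $page_text);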

    Read the article

  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small-scale projects. After my initial paper brainstorm, I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces are the last thing, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that so should my "spec" or design document. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks, so that development runs in a fairly linear fashion. My tasks tend to look like so:

        - Implement binary file reader
        - Implement binary file writer
        - Create object to encapsulate data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudo-code it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class, or concept level? What works for you?

    Read the article

  • CLI include paths to run zend framework via cron

    - by summerg
    I wrote a command-line utility using Zend Framework to do some nightly reporting. It shares a ton of functionality with the accompanying site. It works great when I run it by hand, but when I run it from cron I have include path issues. It seems like it should be easily fixed with set_include_path, but maybe I'm missing something? My directory structure looks like this:

        /var/www/clientname/
            application
                Globals.php
            commandline
                commandline_bootstrap.php
            public_html
                public_bootstrap.php
            library
                Zend

    In public_bootstrap.php I use set_include_path without a problem, relative to the current directory:

        set_include_path('../library' . PATH_SEPARATOR . get_include_path());

    If I understand correctly, in commandline_bootstrap.php I need to put in the absolute path, so cron knows where everything is. My file starts like this:

        error_reporting(E_ALL);
        set_include_path('/var/www/clientname/library' . PATH_SEPARATOR . get_include_path());
        require_once "../application/Globals.php";

    But when I run it via cron I get the following error:

        PHP Fatal error: require_once(): Failed opening required '../application/Globals.php' (include_path='/var/www/clientname/library/') in /var/www/clientname/commandline/zfcli.php on line 11

    I think PHP is accepting my new path, because when I run it from the command line and dump phpinfo I can see:

        include_path = /var/www/clientname/library/:.:/usr/share/pear:/usr/share/php = .:/usr/share/pear:/usr/share/php

    I admit the syntax here looks a little strange, but I can't figure out how to fix it. Any suggestions would be greatly appreciated. Thanks, summer
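    The failing line is the relative require, not the include path: cron runs the script with its own working directory (often the user's home), so '../application' no longer resolves from /var/www/clientname/commandline. A hedged fix is to anchor every path to the script's own location:

        error_reporting(E_ALL);

        // Anchor paths to this file's directory so the cwd cron uses doesn't matter.
        $root = dirname(dirname(__FILE__));   // /var/www/clientname
        set_include_path($root . '/library' . PATH_SEPARATOR . get_include_path());
        require_once $root . '/application/Globals.php';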

    Read the article

  • Android application listed as compatible with Sony Xperia S but still filtered from Google Play

    - by mlidal
    I have published an Android application, and some users are complaining that it is listed as not compatible with the Sony Xperia S. According to the Developer Console, the Xperia S (LT26i) is listed as compatible. Does anyone know of any reason why the app is still filtered from Google Play? I have seen people reporting problems with big APK files. This app is about 20 MB in size, with the largest file being 14 MB. Quite a bit, but not enough to cause problems, I think... Here is the output from aapt dump badging:

        package: name='no.bouvet.nrkut' versionCode='4' versionName='1.0'
        sdkVersion:'4'
        targetSdkVersion:'13'
        uses-permission:'android.permission.ACCESS_FINE_LOCATION'
        uses-permission:'android.permission.ACCESS_COARSE_LOCATION'
        uses-permission:'android.permission.ACCESS_WIFI_STATE'
        uses-permission:'android.permission.ACCESS_NETWORK_STATE'
        uses-permission:'android.permission.INTERNET'
        uses-permission:'android.permission.WRITE_EXTERNAL_STORAGE'
        application-label:'UT.no'
        application-icon-120:'res/drawable-ldpi/utno_launcher.png'
        application-icon-160:'res/drawable-mdpi/utno_launcher.png'
        application-icon-240:'res/drawable-hdpi/utno_launcher.png'
        application-icon-320:'res/drawable-xhdpi/utno_launcher.png'
        application: label='UT.no' icon='res/drawable-mdpi/utno_launcher.png'
        launchable-activity: name='no.bouvet.nrkut.MainActivity' label='UT.no' icon=''
        uses-feature:'android.hardware.location'
        uses-feature:'android.hardware.location.gps'
        uses-feature:'android.hardware.location.network'
        uses-feature:'android.hardware.wifi'
        uses-feature:'android.hardware.touchscreen'
        uses-feature:'android.hardware.screen.portrait'
        main
        other-activities
        search
        supports-screens: 'small' 'normal' 'large' 'xlarge'
        supports-any-density: 'true'
        locales: '--_--'
        densities: '120' '160' '240' '320'

    Read the article

  • CakePHP: Missing database table

    - by Justin
    I have a CakePHP application that is running fine locally. I uploaded it to a production server, and the first page that uses a database connection gives the "Missing Database Table" error. When I look at the controller dump, it's complaining about the first table. I've tried a variety of things to fix this problem, with no luck:

        - I've confirmed that at the command line I can log in with the MySQL credentials given in database.php
        - I've confirmed this table exists
        - I've tried using the MySQL root credentials (temporarily) to see if the problem lies with permissions of the user. The same error appeared.
        - My debug level is currently set to 3
        - I've deleted the entire contents of /app/tmp/cache
        - I've set 777 permissions on /app/tmp
        - I've confirmed that I can run DESCRIBE commands at the command-line MySQL client when logged in with the MySQL credentials used by the application
        - I've verified that the CakePHP log file only contains the error I'm seeing in the browser window
        - I've tried all the suggestions I could find in similar postings on SO
        - I've Googled around and didn't find any other ideas

    I think I've eliminated the obvious problems, and my research isn't turning anything up. I feel like I'm missing something obvious. Any ideas?

    Read the article

  • Reading non-English HTML pages with C#

    - by Gal Miller
    I am trying to find a string in Hebrew on a website. The reading code is attached. Afterwards I try to read the file using StreamReader, but I can't match strings in other languages. What am I supposed to do?

        // used on each read operation
        byte[] buf = new byte[8192];

        // prepare the web page we will be asking for
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.webPage.co.il");

        // execute the request
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();

        // we will read data via the response stream
        Stream resStream = response.GetResponseStream();

        int count = 0;
        FileStream fileDump = new FileStream(@"c:\dump.txt", FileMode.Create);
        do
        {
            count = resStream.Read(buf, 0, buf.Length);
            fileDump.Write(buf, 0, count);   // write only the bytes actually read, not the whole buffer
        } while (count > 0); // any more data to read?
        fileDump.Close();
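    The matching fails because the bytes are never decoded: Hebrew pages are typically served as windows-1255 or UTF-8, and reading the dump file back with the default encoding mangles them. A hedged sketch that decodes using the charset the server reports, falling back to UTF-8 (Encoding.GetEncoding and HttpWebResponse.CharacterSet are real APIs; the fallback choice and searchText are assumptions):

        string charset = string.IsNullOrEmpty(response.CharacterSet) ? "utf-8" : response.CharacterSet;
        Encoding enc = Encoding.GetEncoding(charset);

        string page;
        using (var reader = new StreamReader(response.GetResponseStream(), enc))
        {
            page = reader.ReadToEnd();      // decoded text: comparisons now work on characters, not bytes
        }
        bool found = page.Contains(searchText);   // searchText: the Hebrew string being looked for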

    Read the article

  • DLL Exports: not all my functions are exported

    - by carmellose
    I'm trying to create a Windows DLL which exports a number of functions; however, all my functions are exported but one!! I can't figure it out. The macro I use is this simple one: __declspec(dllexport) void myfunction();. It works for all my functions except one. I've looked inside Dependency Walker and they are all there, except one. How can that be? What would be the cause of that? I'm stuck.

    Edit: to be more precise, here is the function in the .h:

        namespace my { namespace great { namespace namespaaace {

        __declspec(dllexport) void prob_dump(const char *filename,
            const double p[], int nx, const double Q[],
            const double xlow[], const char ixlow[],
            const double xupp[], const char ixupp[],
            const double A[], int my, const double bA[],
            const double C[], int mz,
            const double clow[], const char iclow[],
            const double cupp[], const char icupp[]);

        }}}

    And in the .cpp file it goes like this:

        namespace my { namespace great { namespace namespaaace {

        namespace {
            void dump_mtx(std::ostream& ostr, const double *mtx, int rows, int cols, const char *ind = 0)
            {
                /* some random code there, nothing special, no statics whatsoever */
            }
        } // end anonymous namespace here

        // dump the problem specification into a file
        void prob_dump(const char *filename,
            const double p[], int nx, const double Q[],
            const double xlow[], const char ixlow[],
            const double xupp[], const char ixupp[],
            const double A[], int my, const double bA[],
            const double C[], int mz,
            const double clow[], const char iclow[],
            const double cupp[], const char icupp[])
        {
            std::ofstream fout;
            fout.open(filename, std::ios::trunc);
            /* implementation there */
            dump_mtx(fout, Q, nx, nx);
        }

        }}}

    Thanks
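    Two things worth checking, hedged as guesses from the snippet alone: first, the definition is only exported if that .cpp translation unit actually sees the __declspec(dllexport) declaration, so verify the header is included there and that the definition's signature matches the declaration exactly; second, C++ exports are name-mangled, so the namespaced name in Dependency Walker will be decorated and easy to miss. A common pattern that makes one export explicit and unmangled at the DLL boundary is a flat C-style wrapper (the wrapper name is illustrative; note the parameter originally named "my" is renamed, since it would shadow the namespace in the qualified call):

        extern "C" __declspec(dllexport)
        void prob_dump_c(const char *filename,
                         const double p[], int nx, const double Q[],
                         const double xlow[], const char ixlow[],
                         const double xupp[], const char ixupp[],
                         const double A[], int my_, const double bA[],
                         const double C[], int mz,
                         const double clow[], const char iclow[],
                         const double cupp[], const char icupp[])
        {
            // Forward to the namespaced C++ implementation from the question.
            my::great::namespaaace::prob_dump(filename, p, nx, Q, xlow, ixlow,
                                              xupp, ixupp, A, my_, bA, C, mz,
                                              clow, iclow, cupp, icupp);
        }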

    Read the article

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL: one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them: under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens on a meaningful block of data once a month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate). The other most important (and annoying) recurring issue is that when, for some reason, we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work, and I have to manually dump on one end and upload on the other; quite a task nowadays, moving some .5 TB worth of data. Is there software for this? I know MySQL (the corporation) offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where, if one server is not up, the main URL just resolves to the other server.
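    For reseeding the slave without LOAD DATA FROM MASTER (which was deprecated and only ever worked for MyISAM tables), the usual hedged recipe is a dump that records the master's binlog coordinates, so the slave can resume replication from exactly that point. A sketch with real mysqldump options (user names and file paths are placeholders):

        # On the master: a consistent dump that embeds a CHANGE MASTER TO statement.
        mysqldump -u repl_user -p --all-databases --single-transaction --master-data=1 > full.sql

        # Copy full.sql to the mirror, then on the mirror:
        mysql -u root -p < full.sql
        mysql -u root -p -e "START SLAVE"

    Note that --single-transaction gives a consistent snapshot only for InnoDB tables; for MyISAM a brief read lock on the master is the usual price.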

    Read the article

  • Where does a process' memory space start, and where does it end?

    - by nhaa123
    Hi, I'm trying to dump memory from my application where the variables lie. Here's the function:

        void MyDump(const void *m, unsigned int n)
        {
            const unsigned char *p = reinterpret_cast<const unsigned char *>(m);
            char buffer[16];
            unsigned int mod = 0;

            for (unsigned int i = 0; i < n; ++i, ++mod)
            {
                if (mod % 16 == 0)
                {
                    mod = 0;
                    std::cout << " | ";
                    for (unsigned short j = 0; j < 16; ++j)
                    {
                        switch (buffer[j])
                        {
                            case 0xa: case 0xb: case 0xd: case 0xe: case 0xf:
                                std::cout << " ";
                                break;
                            default:
                                std::cout << buffer[j];
                        }
                    }
                    std::cout << "\n0x" << std::setfill('0') << std::setw(8) << std::hex << (long)i << " | ";
                }
                buffer[i % 16] = p[i];
                std::cout << std::setw(2) << std::hex << static_cast<unsigned int>(p[i]) << " ";
                if (i % 4 == 0 && i != 1)
                    std::cout << " ";
            }
        }

    Now, how can I know at which address my process' memory space starts, where all the variables are stored? And how do I know how long the area is? For instance:

        MyDump(0x0000 /* <-- Starts from here? */, 0x1000 /* <-- This much? */);

    Best regards, nhaa123
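    On Windows there is no single contiguous block to dump: stacks, heaps, and module images are scattered across the address space with unmapped gaps between them. A hedged sketch of the usual discovery mechanism, walking regions with the real VirtualQuery API and visiting only committed, readable ones (the filtering choices are assumptions):

        #include <windows.h>
        #include <iostream>

        void DumpAllRegions()
        {
            MEMORY_BASIC_INFORMATION mbi;
            const unsigned char *addr = 0;

            while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi))
            {
                bool readable = (mbi.State == MEM_COMMIT) &&
                                !(mbi.Protect & (PAGE_NOACCESS | PAGE_GUARD));
                if (readable)
                {
                    // MyDump from the question could be called on each region:
                    // MyDump(mbi.BaseAddress, static_cast<unsigned int>(mbi.RegionSize));
                    std::cout << mbi.BaseAddress << " size 0x" << std::hex
                              << mbi.RegionSize << std::dec << "\n";
                }
                addr = static_cast<const unsigned char *>(mbi.BaseAddress) + mbi.RegionSize;
            }
        }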

    Read the article

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
          <Element>Value</Element>
          <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
          <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the request and response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep, but there the XML was all on one line, not multiple lines like above. The log files are also somewhat large, hitting 500-600 MB per log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way, using the built-in tools on a Linux box (CentOS in this case), to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
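    sed's range addressing handles this without loading the file into memory: print from each marker line through the matching closing tag. A hedged sketch for both block types (tag names taken from the sample above; the "Request XML:"/"Response XML:" marker lines ride along in the output and can be stripped afterwards with grep -v if unwanted):

        sed -n '/Request XML:/,/<\/RootTag>/p' app.log > requests.txt
        sed -n '/Response XML:/,/<\/ResponseRoot>/p' app.log > responses.txt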

    Read the article

  • How to use the keyword this in the right context in a mouse wrapper in JavaScript?

    - by MartyIX
    Hi, I'm trying to write a simple wrapper for mouse behaviour. This is my current code:

        function MouseWrapper() {
            this.mouseDown = 0;
            this.OnMouseDownEvent = null;
            this.OnMouseUpEvent = null;

            document.body.onmousedown = this.OnMouseDown;
            document.body.onmouseup = this.OnMouseUp;
        }

        MouseWrapper.prototype.Subscribe = function (eventName, fn) {
            // Subscribe a function to the event
            if (eventName == 'MouseDown') {
                this.OnMouseDownEvent = fn;
            } else if (eventName == 'MouseUp') {
                this.OnMouseUpEvent = fn;
            } else {
                alert('Subscribe: Unknown event.');
            }
        }

        MouseWrapper.prototype.OnMouseDown = function () {
            this.mouseDown = 1;
            // Fire event
            $.dump(this.OnMouseDownEvent);
            if (this.OnMouseDownEvent != null) {
                alert('test');
                this.OnMouseDownEvent();
            }
        }

        MouseWrapper.prototype.OnMouseUp = function () {
            this.mouseDown = 0;
            // Fire event
            if (this.OnMouseUpEvent != null) {
                this.OnMouseUpEvent();
            }
        }

    From what I gathered, it seems that in MouseWrapper.prototype.OnMouseUp and MouseWrapper.prototype.OnMouseDown the keyword "this" doesn't mean the current instance of MouseWrapper but something else. It makes sense that it doesn't point to my instance, but how do I solve the problem? I want to solve it properly; I don't want to use something dirty. My thinking:

        - use a singleton pattern (the mouse is only one, after all)
        - somehow pass my instance to OnMouseDown/Up - how?

    Thank you for help!
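    Exactly: when the browser invokes document.body.onmousedown, this is the DOM element (or window), not the wrapper. The classic library-free fix is to close over the instance in the constructor; a minimal sketch:

        function MouseWrapper() {
            this.mouseDown = 0;
            this.OnMouseDownEvent = null;
            this.OnMouseUpEvent = null;

            var self = this; // captured by the closures below
            document.body.onmousedown = function () { self.OnMouseDown(); };
            document.body.onmouseup = function () { self.OnMouseUp(); };
        }

    Inside OnMouseDown and OnMouseUp, this is now the MouseWrapper instance, so the prototype methods work unchanged.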

    Read the article

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (during peak hours; around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with "fopen(file,'a')" and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable. The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes, which will normalize the data and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so any intuitive ways, if there are multiple ways of going at it, you smart people can think of... please let me know :)
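    The biggest cheap win is usually batching: one bulk load carries far less per-statement overhead than thousands of single-row INSERTs. A hedged PHP sketch of the flat-file approach mentioned above, where the cron job rotates the file and bulk-loads it (paths, credentials, and table/column names are placeholders; LOAD DATA INFILE is a real MySQL statement and typically the fastest ingest path, though it needs the FILE privilege and a file readable by the server):

        <?php
        // Tracker script: append one tab-separated line per hit.
        $line = sprintf("%s\t%s\t%s\n", time(), $siteId, $_SERVER['REMOTE_ADDR']);
        file_put_contents('/var/spool/tracker/hits.log', $line, FILE_APPEND);

        // Cron job (every 15-30 min): rotate, then bulk load.
        // rename() is atomic on the same filesystem, so the tracker starts a fresh file.
        rename('/var/spool/tracker/hits.log', '/var/spool/tracker/hits.batch');
        $mysqli = new mysqli('localhost', 'user', 'pass', 'tracking');
        $mysqli->query("LOAD DATA INFILE '/var/spool/tracker/hits.batch'
                        INTO TABLE raw_hits (hit_time, site_id, ip)");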

    Read the article

  • Are there more secure alternatives to the .Net SQLConnection class?

    - by KeyboardMonkey
    Hi SO people, I'm very surprised this issue hasn't been discussed in depth: this article tells us how to use windbg to dump a running .NET process's strings in memory. I spent much time researching the SecureString class, which uses unmanaged pinned memory blocks and keeps the data encrypted too. Great stuff. The problem comes in when you use its value and assign it to the SQLConnection.ConnectionString property, which is of the System.String type. What does this mean? Well...

        - It's stored in plain text
        - Garbage collection moves it around, leaving copies in memory
        - It can be read with windbg memory dumps

    That totally negates the SecureString functionality! On top of that, the SQLConnection class is non-inheritable; I can't even roll my own with a SecureString property instead. Yay for closed source. Yay. A new DAL layer is in progress, but for a new major version, and with so many users it will be at least 2 years before every user is upgraded; others might stay on the old version indefinitely, for whatever reason. Because of the frequency with which the connection is used, marshalling from a SecureString won't help, since the immutable old copies stick in memory until GC comes around. Integrated Windows security isn't an option, since some clients don't work on domains, and others roam and connect over the net. How can I secure the connection string, in memory, so it can't be viewed with windbg?

    Read the article

  • Visual Studio crashes consistently on web-related projects

    - by Traveling Tech Guy
    Hi, I have a brand new VS2010 installed on a Win2008R2 machine. I started getting this error when debugging a WCF service project: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." When I started developing a web site a week later, this became consistent: I can't debug it. The stack dump reads:

        at Microsoft.VisualStudio.WebHost.Host.ProcessRequest(Connection conn)
        at Microsoft.VisualStudio.WebHost.Server.OnSocketAccept(Object acceptedSocket)
        at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
        at System.Threading.ThreadPoolWorkQueue.Dispatch()
        at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()

    I tried searching online, and some recommend turning off "Suppress JIT Optimizations" in the Debugging options - this does not seem to make a difference. Clearly the problem is with the built-in web server. But am I doing something wrong? Is there something I can do? Or is this a known bug? Thanks for your time, Guy

    Update 12/31: Today I tried using CassiniDev as a replacement for the original VS2010 web server - exact same result. My suspicion is that there's some internal conflict between VS2010, Windows Server 2008 R2, and maybe the fact that it's a 64-bit OS. I switched to using IIS as my debug server - and that seems to work, with some annoying side effects. My conclusion: do not use a 64-bit server system as your dev machine. Develop on 32-bit - deploy to 64-bit. Side conclusion: there are some scenarios Microsoft's QA doesn't test.

    Read the article

  • Fast, easy, and secure method to perform DB actions with GET

    - by rob - not a robber
    Hey All, this is sort of a methods/best-practices question here that I am sure has been addressed, yet I can't find a solution based on the vague search terms I enter. I know starting off the question with "fast and easy" will probably draw out a few sighs, so my apologies. Here is the deal: I have a logged-in area where an ADMIN can do a whole host of POST operations to input data relating to their profile. The way I have the data structured is pretty distinct and well segmented in most tables, as it relates to the ID of the admin. Now, I have a table where I dump one type of data and differentiate the records by assigning the ADMIN's unique ID to each one. In other words, all ADMINs have this one type of data writing to this table; I just differentiate by the ADMIN ID on each record. I was planning on letting the ADMIN remove these records by clicking on a link with a query string - obviously using GET. Obviously, the query structure is in the link, so any logged-in admin could then exploit the URL and delete a competitor's records. Is the only way to safely do this through POST, or should I pass through the session info that includes the password and validate it against the ADMIN ID that is requesting the delete? This is obviously much more work for me. As they said in the auto repair biz I used to work in... there are three ways to do a job: fast, good, and cheap. You can only have two at a time. Fast and cheap will not be good. Good and cheap will not have fast turnaround. Fast and good will NOT be cheap. haha I guess that applies here... can never have fast, easy, and secure all at once ;) Thanks in advance...
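    The core fix is cheap regardless of verb: never trust an ID from the URL alone; scope every delete to the admin stored in the session, so a forged query string can only ever touch the caller's own rows. A hedged sketch (session key, table, and column names are placeholders, and $pdo is an existing PDO connection; POST plus a CSRF token is still the right verb for a state change, since GET links get prefetched and cached):

        <?php
        session_start();

        // The admin ID comes from the authenticated session, never from the request.
        $adminId  = (int) $_SESSION['admin_id'];
        $recordId = (int) $_POST['record_id'];

        $stmt = $pdo->prepare('DELETE FROM admin_data WHERE id = ? AND admin_id = ?');
        $stmt->execute(array($recordId, $adminId));
        // Zero affected rows means the record wasn't theirs: nothing is deleted.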

    Read the article
