Search Results

Search found 11618 results on 465 pages for 'shared storage'.

Page 341/465

  • How should I migrate DDL changes from one environment to the next?

    - by Rl
    I make DDL changes using SQL Developer's GUI. Problem is, I need to apply those same changes to the test environment. I'm wondering how others handle this issue. Currently I'm having to manually write ALTER statements to bring the test environment into alignment with the development environment, but this is prone to error (doing the same thing twice). In cases where there's no important data in the test environment I usually just blow everything away, export the DDL scripts from dev and run them from scratch in test. I know there are triggers that can store each DDL change, but this is a heavily shared environment and I would like to avoid that if possible. Maybe I should just write the DDL stuff manually rather than using the GUI?
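
    One lightweight, hedged option (assuming an Oracle database, since SQL Developer is mentioned) is to let the database script the changed dev objects with DBMS_METADATA and then diff or replay that output against test, instead of hand-writing ALTER statements; the object and schema names below are placeholders.

        -- run as a script in SQL Developer / SQL*Plus against the dev schema
        SET LONG 200000
        SET PAGESIZE 0
        SELECT DBMS_METADATA.GET_DDL('TABLE', 'ORDERS',    'DEV_SCHEMA') FROM DUAL;
        SELECT DBMS_METADATA.GET_DDL('INDEX', 'ORDERS_PK', 'DEV_SCHEMA') FROM DUAL;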

    Read the article

  • mysql does not utilize my cpu and ram enough?

    - by vick
    Hello Everyone! I am importing a 2.5gb csv file to a mysql table. My storage engine is innodb. Here is the script: use xxx; DROP TABLE IF EXISTS `xxx`.`xxx`; CREATE TABLE `xxx`.`xxx` ( `xxx_id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(128) NOT NULL, `yy` varchar(128) NOT NULL, `yyy` varchar(64) NOT NULL, `yyyy` varchar(2) NOT NULL, `yyyyy` varchar(10) NOT NULL, `url` varchar(64) NOT NULL, `p` varchar(10) NOT NULL, `pp` varchar(10) NOT NULL, `category` varchar(256) NOT NULL, `flag` varchar(4) NOT NULL, PRIMARY KEY (`xxx_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; set autocommit = 0; load data local infile '/home/xxx/raw.csv' into table company fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n' ( name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag ); commit; Why does my PC (core i7 920 with 6gb ram) only consume 9% cpu power and 60% ram when running these queries?
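
    For what it's worth, a single LOAD DATA statement runs on one thread and is mostly disk-bound, so low CPU use is expected on an 8-thread i7 920. A hedged sketch of session settings that often speed up a one-off InnoDB bulk load (these are standard MySQL variables, but whether they help depends on the table and server configuration, and they should be restored afterwards):

        -- relax constraint checking for the duration of the import
        SET unique_checks = 0;
        SET foreign_key_checks = 0;
        SET autocommit = 0;

        LOAD DATA LOCAL INFILE '/home/xxx/raw.csv' INTO TABLE xxx
          FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
          LINES TERMINATED BY '\r\n'
          (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);

        COMMIT;
        SET unique_checks = 1;
        SET foreign_key_checks = 1;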

    Read the article

  • Method params match signature, but still getting error

    - by Jason
    I am in the midst of converting a VB library to C#. One of my methods has the following signature in VB: Private Shared Sub FillOrder(ByVal row As DataRowView, ByRef o As Order) In C# I converted it to: private static void FillOrder(DataRowView row, ref Order o) From my constructor inside my Order class, I am calling the FillOrder() method like so: DataView dv = //[get the data] if (dv.Count > 0) { FillOrder(dv[0], this); } In VB, this works: Dim dv As DataView = '[get data]' If dv.Count > 0 Then FillOrder(dv.Item(0), Me) End If However, in VS10 in the C# file I am getting a red squiggle under this call with the following error: The best overloaded method match for [the method] has some invalid arguments This was working code in VB. What am I doing wrong?
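
    A hedged sketch of the usual resolution, assuming Order is a class (reference type): C#, unlike VB, requires the ref keyword at the call site, and this cannot be passed as a ref argument from inside the class because it is read-only there. Since only the reference would be copied anyway, the simplest change is to drop ref entirely; the property and column names below are illustrative.

        using System.Data;

        public class Order
        {
            public string CustomerName { get; set; }   // hypothetical property

            public Order(DataView dv)
            {
                if (dv.Count > 0)
                {
                    // no 'ref' needed: FillOrder receives a copy of the reference
                    // and can populate this same Order instance
                    FillOrder(dv[0], this);
                }
            }

            private static void FillOrder(DataRowView row, Order o)
            {
                o.CustomerName = row["CustomerName"] as string;   // hypothetical column
            }
        }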

    Read the article

  • Different ways to specify libraries to gcc/g++

    - by abigagli
    I'd be curious to understand whether there's any substantial difference between the two following ways of specifying libraries (both shared and static) to gcc/g++ (CC can be g++ or gcc): CC -o output_executable /path/to/my/libstatic.a /path/to/my/libshared.so source1.cpp source2.cpp ... sourceN.cpp vs CC -o output_executable -L/path/to/my/libs -lstatic -lshared source1.cpp source2.cpp ... sourceN.cpp The only major difference I can see is that passing the fully-specified library name directly gives greater control over choosing the static or dynamic version, but I suspect something else is going on that could have side effects on how the executable is built or behaves at runtime. Am I right? Andrea.
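
    For comparison, a hedged sketch of how the two forms behave (paths and names are the placeholders from the question). Giving the full filename links exactly that file, while -l makes the linker search the -L paths and, by default, prefer the shared libNAME.so over libNAME.a when both exist; -Wl,-Bstatic and -Wl,-Bdynamic can override that preference per library. Libraries are placed after the sources so the linker can resolve their symbols.

        # full paths: these exact files are linked, static or shared as named
        g++ -o output_executable source1.cpp /path/to/my/libstatic.a /path/to/my/libshared.so

        # -L/-l: the linker searches the path; -Bstatic/-Bdynamic pin the choice per library
        g++ -o output_executable source1.cpp -L/path/to/my/libs -Wl,-Bstatic -lstatic -Wl,-Bdynamic -lshared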

    Read the article

  • How can I persist a large Perl object for re-use between runs?

    - by Alnitak
    I've got a large XML file, which takes over 40 seconds to parse with XML::Simple. I'd like to be able to cache the resulting parsed object so that on the next run I can just retrieve the parsed object and not reparse the whole file. I've looked at using Data::Dumper, but the documentation is a bit lacking on how to store and retrieve its output from disk files. Other classes I've looked at (e.g. Cache::Cache) appear designed for storage of many small objects, not a single large one. Can anyone recommend a module designed for this? EDIT: The XML file is ftp://ftp.rfc-editor.org/in-notes/rfc-index.xml. On my Mac Pro, benchmark figures for reading the entire file are: test1 (XML::Simple) 47.8 s/iter, test2 (Storable) 0.148 s/iter, i.e. roughly a 320x speedup for Storable.
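
    Since Storable already wins the benchmark above, a minimal sketch of using it as the cache (file names are placeholders): store() writes the parsed structure to disk and retrieve() brings it back on later runs, falling back to a full reparse only when the XML is newer than the cache.

        use strict;
        use warnings;
        use XML::Simple;
        use Storable qw(store retrieve);

        my $xml_file   = 'rfc-index.xml';
        my $cache_file = 'rfc-index.storable';

        my $data;
        if ( -e $cache_file && -M $cache_file < -M $xml_file ) {
            # cache exists and is newer than the XML, so reuse it
            $data = retrieve($cache_file);
        }
        else {
            $data = XMLin($xml_file);        # the slow 40-second parse
            store( $data, $cache_file );     # persist the structure for the next run
        }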

    Read the article

  • Mono - Could not find a 'Sub Main' in ''

    - by lampej
    I started a new solution (with multiple projects) and am trying to get it to build. Initially I was getting an internal compiler error and thought maybe it had to do with MySql, so I removed all references to MySql. Now I am getting the error "Could not find a 'Sub Main' in ''". I have made sure that all of my projects have a Main subroutine like this: Public Shared Sub Main() End Sub 2 out of the 7 projects will compile. I don't know what makes these projects different from the others, and the error message isn't very helpful. Any experience with this one?
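
    A hedged guess at the usual causes: either some of the projects are really class libraries but are being built with an Exe output type, or Main lives inside a class (not a Module) and the compiler is never told which class is the startup object. Both are settings in the project file; the project and namespace names below are placeholders.

        <!-- .vbproj of a project that should not produce an executable -->
        <PropertyGroup>
          <OutputType>Library</OutputType>
        </PropertyGroup>

        <!-- .vbproj of the startup project, when Main is a Shared method inside a class -->
        <PropertyGroup>
          <OutputType>Exe</OutputType>
          <StartupObject>MyApp.Program</StartupObject>
        </PropertyGroup>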

    Read the article

  • Errors when using a Ruby On Rails scaffold with a data type of interger

    - by bublebboy
    I am learning Ruby on Rails. I am on a shared host with Ruby version 1.8.1 and Rails version 2.3.10. I am working my way through a tutorial at http://railstutorial.org/chapters/a-demo-app?version=2.3#top, and at one point it has me run: script/generate scaffold Micropost content:string user_id:interger The tutorial uses the default database, SQLite3. The command works and I use rake db:migrate to create the database. I can view the page listing the microposts (which is empty), but when I try to add a micropost (microposts/new) I get an error: undefined method `user_id' for #<Micropost:0x7f710e4988e8> After doing some testing on my own, it seems the problem is caused by using the data type interger. While I understand that using a scaffold is not the best way of building a Ruby on Rails app, I'm just beginning and would still like to know why I am experiencing this problem, to help me better understand how Rails works.
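
    The misspelled type is the root cause: interger is not a type the generator recognises, so the microposts table ends up without a usable user_id column and the model gets no user_id attribute. A hedged sketch of two ways to fix it in the tutorial's Rails 2.3 setup (you may need rake db:rollback before regenerating if the table already exists):

        # Option 1: regenerate the scaffold with the correct type
        script/destroy scaffold Micropost
        script/generate scaffold Micropost content:string user_id:integer
        rake db:migrate

        # Option 2: keep the scaffold and add the missing column in a new migration
        script/generate migration AddUserIdToMicroposts user_id:integer
        rake db:migrate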

    Read the article

  • The cost of finalize in .Net

    - by Jules
    (1) I've read a lot of questions about IDisposable where the answers recommend not using Finalize unless you really need to, because of the processing time involved. What I haven't seen is how much this cost is and how often it's paid. Every millisecond? Second? Hour, day, etc.? (2) Also, it seems to me that Finalize is handy when it's not always known whether an object can be disposed. For instance, the framework Font class: a control can't dispose of it because it doesn't know if the font is shared. The font is usually created at design time, so the user won't know to dispose it; therefore Finalize kicks in to finally get rid of it when there are no references left. Is that a correct impression?
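
    For context on (1): the cost is per finalizable object, not per unit of time. An object with a finalizer is put on the finalization queue, survives at least one extra garbage collection, and is processed by the single finalizer thread, so the price is paid at each GC that touches such objects. A minimal sketch of the standard dispose pattern, where calling Dispose suppresses the finalizer so that cost is only paid when nobody disposed the object:

        using System;

        public sealed class NativeResourceHolder : IDisposable
        {
            private IntPtr _handle;      // stand-in for an unmanaged resource
            private bool _disposed;

            public void Dispose()
            {
                if (_disposed) return;
                // release _handle here
                _disposed = true;
                GC.SuppressFinalize(this);   // finalization cost is skipped entirely
            }

            ~NativeResourceHolder()
            {
                // safety net: runs only if Dispose was never called,
                // e.g. the shared-Font situation described in (2)
            }
        }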

    Read the article

  • Write Local File using Adobe Air Iframe

    - by user290687
    This is driving me mad. I am creating an AIR application and everything is working great. However, I would really like to have a form inside an iframe that, when the user clicks submit, saves the file to the local application storage directory. Right now I am able to do this and save the file with no problems when I access the HTML page outside an iframe. However, if I wrap the page in an iframe and hit submit, the file does not save. Any code examples would be very much appreciated. When I am using the iframe, my code looks as follows: <iframe src="jobs/newjob.html" height="800px" width="800px" sandboxRoot="app:/" documentRoot="app:/sandbox" ondominitialize="setupBridge()">
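
    A hedged sketch of the usual pattern: a page loaded in a sandboxed iframe has no access to the AIR file APIs, so the application-sandbox page exposes a save function to the iframe over the sandbox bridge (set up in the dominitialize handler already referenced) and the form's submit handler calls it. The element id, function and file names are illustrative, and AIRAliases.js is assumed to be loaded in the outer page.

        // In the application-sandbox page that owns the <iframe>:
        function setupBridge() {
            var frame = document.getElementById("jobsFrame");   // hypothetical id on the iframe
            frame.contentWindow.parentSandboxBridge = {
                saveJob: function (text) {
                    var file = air.File.applicationStorageDirectory.resolvePath("newjob.txt");
                    var stream = new air.FileStream();
                    stream.open(file, air.FileMode.WRITE);
                    stream.writeUTFBytes(text);
                    stream.close();
                }
            };
        }

        // In jobs/newjob.html, inside the iframe, on form submit:
        // window.parentSandboxBridge.saveJob(serializedFormData);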

    Read the article

  • Organizing PHP includes in your development environment

    - by Andrew Heath
    I'm auditing my site design based on the excellent Essential PHP Security by Chris Shiflett. One of the recommendations I'd like to adopt is moving all possible files out of webroot, and that includes the includes. Doing so on my shared host is simple enough, but I'm wondering how people handle this on their development testbeds. Currently I've got an XAMPP installation configured so that localhost/mysite/ maps to D:\mysite\, in which includes are stored at D:\mysite\includes\. In order to keep include paths accurate, I'm guessing I need to replicate the server's path on my local disk? Something like D:\mysite\public_html\. Is there a better way?
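
    Rather than mirroring the production directory layout locally, a common alternative is to resolve the includes directory through a single per-environment constant, so the same code works under D:\mysite on XAMPP and with includes above webroot on the shared host. A minimal sketch, with the paths and file names as placeholders:

        <?php
        // config.php lives in webroot and is the only file that knows where includes are.
        define('APP_INCLUDE_PATH', 'D:/mysite/includes');          // local XAMPP value
        // define('APP_INCLUDE_PATH', '/home/user/includes');      // shared-host value, above public_html

        // elsewhere in the application:
        require APP_INCLUDE_PATH . '/db.php';   // db.php is a hypothetical include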

    Read the article

  • How to deploy App_Data files with Azure cloud service (web role)

    - by user2977157
    I have a read-only data file (for IP geolocation) that my web role needs to read. It is currently in the App_Data folder, which is NOT included in the deployment package for the cloud service. Unlike "web deploy", there is no checkbox for an Azure cloud service deployment to include/exclude App_Data. Is there a reasonable way to get the deployment package to include the App_Data folder/files? Or is using Azure storage for this sort of thing the better way to go (cost- and performance-wise)? I am using Visual Studio 2013 and the Azure SDK 2.2.
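
    A hedged pointer (packaging details vary by SDK version): marking the file's Build Action as Content in the web role project is usually enough for cspack to carry it into the role package, even though there is no App_Data checkbox. The file name below is a placeholder; for large or frequently updated data files, blob storage plus a local cached copy is the common alternative.

        <!-- in the web role's .csproj -->
        <ItemGroup>
          <Content Include="App_Data\GeoIP.dat">
            <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
          </Content>
        </ItemGroup>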

    Read the article

  • Getting text after URL in asp.net / URL Rewriting (sort of!)

    - by alex
    My app is a very simple "one page" type app: it has Default.aspx. I'm basically trying to handle, for example: www.myappurl.com/this is my text I want to get hold of "this is my text" from the above example. This will be displayed on the page (for now). I didn't really want to have to use any complex URL rewriting for this (my hosting provider uses IIS6). I tried using a 404 handler, but this is a bit long-winded, and I'm on shared hosting that can't set the "execute URL" on custom 404 pages. Any other ideas?

    Read the article

  • How to share common css and other resources among grails projects?

    - by Troy
    I'm working on a grails-based web application that will be composed of a couple of different grails projects, each developed by a separate team, which will eventually all be unified under a common "portal." So they need to have the same look and feel, at least to some degree. Is there a "blessed" way to share resources like this among projects? Something using the grails plugin architecture maybe? Would it make sense to just create a separate lightweight project containing nothing but the css and any shared resources? How have the rest of you handled sharing things between different grails projects?
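
    The plugin route is the usual answer: a lightweight plugin that contains nothing but the shared CSS, layouts and other assets, which each application then declares as a dependency, so the look and feel is versioned in one place. A hedged sketch of the steps (the plugin name and paths are placeholders; older Grails versions use grails install-plugin instead of a BuildConfig entry):

        grails create-plugin shared-theme
        # put the shared assets inside the plugin, e.g.
        #   shared-theme/web-app/css/portal.css
        #   shared-theme/grails-app/views/layouts/portal.gsp
        # publish it to an internal repo and reference it from each application,
        # e.g. in grails-app/conf/BuildConfig.groovy:
        #   plugins { compile ":shared-theme:0.1" }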

    Read the article

  • Any way to add an observer to the head of the queue using Element#observe?

    - by Josh
    This might not be possible, but before I rewrite part of my application I wanted to ask... I have a JavaScript app which creates a submit <input> and observes that input's click event using Prototype's Element#observe function. For a few particular pages on one particular site which uses this app, I need to apply some additional business logic before the code that normally executes when the button is clicked. Is there any way I can use Element#observe to add my new event handler ahead of the existing event handler, so I can stop the event if these new conditions aren't met? If not, I'll probably solve this the "proper" way by having the application fire a specific beforeTakingAction event and add a listener for that which prevents the application from taking its action, but that's more complicated than this simple problem requires, and it means rewriting part of a shared application for just one user...
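
    Prototype itself only appends handlers, so Element#observe cannot jump the queue. A hedged workaround for standards browsers is a native capture-phase listener on an ancestor of the submit input: it runs before the event ever reaches the button, so stopping it there keeps the existing handler from firing. The element id and the rule check are hypothetical.

        // capture-phase listeners on an ancestor fire before the button's own handlers
        var form = $('orderForm');                    // hypothetical ancestor of the submit <input>
        form.addEventListener('click', function (event) {
            if (!extraBusinessRulesPass()) {          // hypothetical new check
                event.stopPropagation();              // the existing click handler never runs
                event.preventDefault();
            }
        }, true);                                     // true = capture phase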

    Read the article

  • Established javascript solution for secure registration & authentication without SSL

    - by Tomas
    Is there any solution for secure user registration and authentication without SSL? By "secure" I mean safe from passive eavesdropping, not from man-in-the-middle attacks (I'm aware that only SSL with a signed certificate reaches that degree of security). The registration step (password setup, i.e. the exchange of pre-shared keys) must also be secured without SSL (this will be the hardest part, I guess). I'd prefer an established and well-tested solution; if possible, I don't want to reinvent the wheel and make up my own cryptographic protocols. Thanks in advance.

    Read the article

  • Reporting system for organization. Architecture advise required

    - by Andrew Florko
    We have several legacy and third-party systems in our organization that use several RDBMS vendors (and some more specialized data stores). We need cross-system data reporting (as well as extra reports that are not implemented in the third-party systems), with charts and population of templates (Word, Excel). The reporting system is envisioned as an intranet web site with per-user access to reports. We expect ~50 reports per day. Would you suggest BizTalk or other integration software, given that the commercial department doesn't plan to buy anything expensive? Would you suggest creating a centralized data store for reporting that is populated regularly, or relying on on-demand services that provide up-to-date data at request time? Thank you in advance!

    Read the article

  • Where does MSBuild store its options, and how can I change them?

    - by Neil
    We have a large project at work, under source control, including an MSBuild file to run the build. Recently, the build has stopped working on my machine (I get errors saying that 'zzz' is ambiguous in the namespace 'yyy'). The same MSBuild file is working fine on both the build server and my co-workers machines. I have tried cloning a new copy of the project from the shared repository, but even with a clean copy, the build is failing for me. I think it must be a problem with the MSBuild settings on my machine, but I haven't been able to find anything that tells me where they are. Any help would be appreciated, since I'm starting to think my machine has just gone crazy.
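
    For what it's worth, MSBuild has almost no per-machine options of its own: behaviour comes from the project and targets files, environment variables and the registered toolset, so a machine-specific failure usually points to a different toolset, SDK or environment rather than a stored setting. A hedged way to find the difference is to capture a diagnostic-verbosity build on a working machine and on the failing one and diff the property and toolset dumps:

        rem run on both machines, then compare the two logs
        msbuild MyBigProject.sln /t:Rebuild /verbosity:diagnostic > msbuild-diag.log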

    Read the article

  • what mysql table structure is better

    - by Sergey
    I have a very complicated search algorithm on my site, so I decided to make a table caching some (or maybe all) possible results. I want to ask which structure would be better, or maybe neither of them (MySQL): 1) word VARCHAR, results TEXT or BLOB where I'll store the ids of the found objects (for example, 6 chars per id); 2) word VARCHAR, result INT, but words are not unique. I think I'll have about 200,000 rows in 1), with 1,000-10,000 ids per row, or 200,000,000+ rows in 2). The first way takes more storage, but I think it would be much faster to find 1 unique row among 200,000 than 1,000 rows among 200 million non-unique rows. I'm thinking of an index on the word column and no Sphinx. So what do YOU think? P.S. As always, sorry for my English if it's not very good.
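
    For reference, a sketch of the two layouts as DDL (names and sizes are placeholders). With a composite index on (word, result), option 2's many short rows are the shape MySQL usually handles well, whereas option 1 forces the application to parse id lists out of a text column; finding the row is cheap either way, it is what happens after the lookup that tends to differ.

        -- Option 1: one row per word, all matching ids packed into one text column
        CREATE TABLE search_cache_packed (
            word    VARCHAR(64) NOT NULL PRIMARY KEY,
            results MEDIUMTEXT  NOT NULL      -- e.g. concatenated 6-char object ids
        ) ENGINE=InnoDB;

        -- Option 2: one row per (word, id) pair; the composite key makes lookups cheap
        CREATE TABLE search_cache_rows (
            word   VARCHAR(64)  NOT NULL,
            result INT UNSIGNED NOT NULL,
            PRIMARY KEY (word, result)
        ) ENGINE=InnoDB;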

    Read the article

  • Recommend a local LDAP store for development

    - by Paul Stovell
    Our project uses an LDAP repository for storing users. In production this will be Active Directory. For development, we seem to have a couple of options: Install an AD LDS instance that everyone uses Install an AD LDS instance on every developer machine We're trying to keep the 'F5' experience as lightweight as possible, so installing things or relying on a central AD store aren't my favorite ideas. There are other LDAP servers, like Open LDAP. I was hoping there might be an LDAP server that simply talks to an XML file. This would allow us to store the XML file in source control and have something that is fast and works. Our nightly builds would still use AD to pick up any differences, but the hope is since we're using LDAP it should Just Work. Can you recommend an LDAP implementation that works well for zero-config shared-nothing development?

    Read the article

  • Does anyone else think instance variables are problematic in database-backed applications?

    - by Ben Aston
    It occurs to me that state control in languages like C# is not well supported. By this I mean it is left up to the programmer to manage the state of in-memory objects. A common use case is that instance variables in the domain model are copies of information residing in persistent storage (i.e. the database). Clearly this violates the single-point-of-authority principle, and "synchronisation" has to be managed by the developer. I envisage a system where, instead of instance variables, we have simple public accessor/mutator methods marked with attributes that link them to the database, and where reads and writes are mediated by a framework that decides whether to hit the database. Does such a system exist? Am I completely missing the point, or is there some truth to this idea?
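
    A minimal sketch of the kind of system described, with every name hypothetical: the property keeps no field of its own and delegates each read and write to a mediator that decides whether to hit the database. Lazy-loading ORMs (NHibernate, Entity Framework) approximate this with proxies, though they still copy values into instance state once loaded.

        using System;

        [AttributeUsage(AttributeTargets.Property)]
        public sealed class ColumnAttribute : Attribute
        {
            public ColumnAttribute(string name) { Name = name; }
            public string Name { get; private set; }
        }

        // Hypothetical mediator: the single point of authority for values.
        public interface IPersistenceMediator
        {
            T Read<T>(object entity, string column);        // may serve from cache or hit the database
            void Write<T>(object entity, string column, T value);
        }

        public class Customer
        {
            private readonly IPersistenceMediator _mediator;
            public Customer(IPersistenceMediator mediator) { _mediator = mediator; }

            [Column("name")]
            public string Name
            {
                get { return _mediator.Read<string>(this, "name"); }
                set { _mediator.Write(this, "name", value); }
            }
        }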

    Read the article

  • Drupal 6: heavy use of the Views module causing the site to go down because of too many MySQL connections

    - by artmania
    Hi friends, I have the HostGator Baby shared plan, which I'm developing a Drupal site on. Everything was fine at the beginning, but as development went further the site started to work really slowly; now it is not working at all, giving MySQL errors like "Too many connections". I created a lot of blocks and pages with Views, so my site depends heavily on the database. Should I not do that? Could that be the reason my site isn't working now? I'd appreciate any help!

    Read the article

  • How to add authentication property for login to directory path when running batch file in WCF?

    - by blankon91
    I have a class in my WCF service to execute a batch file. When I test running a batch file in a shared directory, everything is fine and the batch is executed, but when I try to run the batch file from a secured directory, I get the error "ACCESS DENIED". How do I add login properties so I can access my secured directory to execute my batch file? Here is my code: public string ExecuteBat() { string hasil = ""; ProcessStartInfo processInfo = new ProcessStartInfo(@"D:\Rpts\SSIS_WeeklyFlash_AAF_1.bat"); processInfo.CreateNoWindow = true; processInfo.UseShellExecute = false; Process process = Process.Start(processInfo); process.WaitForExit(); if (process.ExitCode == 0) { hasil = "BAT EXECUTED!"; } else { hasil = "EXECUTE BAT FAILED"; } return hasil; }
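
    A hedged sketch of one way to do it: ProcessStartInfo accepts UserName, Domain and a SecureString Password (with UseShellExecute = false), so the batch file can be launched under an account that does have rights to the secured directory. The account values are placeholders and should come from configuration rather than source code.

        using System;
        using System.Diagnostics;
        using System.Security;

        public static class BatRunner
        {
            public static int RunAs(string user, string domain, string password)
            {
                var psi = new ProcessStartInfo(@"D:\Rpts\SSIS_WeeklyFlash_AAF_1.bat")
                {
                    UseShellExecute = false,    // required when supplying credentials
                    CreateNoWindow  = true,
                    UserName        = user,     // e.g. a service account with access to the directory
                    Domain          = domain,
                    Password        = ToSecureString(password)
                };

                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                    return process.ExitCode;    // 0 means the batch ran successfully
                }
            }

            private static SecureString ToSecureString(string value)
            {
                var secure = new SecureString();
                foreach (var c in value) secure.AppendChar(c);
                return secure;
            }
        }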

    Read the article

  • Error #2126: NetConnection object must be connected

    - by Shuo
    Hey guys, I want to count the online users: when each client logs in to the system, it connects to the server and increments a variable stored in a remote shared object. But when the client connects to the server, a problem arises: Error #2126: NetConnection object must be connected. My web layout: Website --- apps --- userLogin Code snippets: rtmpnc = new NetConnection(); rtmpnc.objectEncoding = ObjectEncoding.AMF0; var uri:String = ServerConfig.getChannel("my-rtmp").endpoint + "/userLogin"; rtmpnc.connect("http://202.206.249.193:2367/userLogin"); rtmpnc.addEventListener(NetStatusEvent.NET_STATUS,onNetStatusHandler); The onNetStatusHandler is defined as: switch(event.info.code) { case "NetConnection.Connect.Success":onConnSuccess();break; case "NetConnection.Connect.Failed":onConnError();break; } Could anyone help me out? Many thanks! Best, Shuo
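
    Error #2126 typically means a remote SharedObject (or a NetStream) is attached to the NetConnection before "NetConnection.Connect.Success" arrives, or the connect never succeeded at all (note the hard-coded http:// URL where an rtmp:// endpoint is normally expected). A hedged sketch of the usual ordering, with the class name, shared-object name and endpoint as placeholders:

        package {
            import flash.display.Sprite;
            import flash.events.NetStatusEvent;
            import flash.net.NetConnection;
            import flash.net.ObjectEncoding;
            import flash.net.SharedObject;

            public class LoginCounter extends Sprite {
                private var rtmpnc:NetConnection;
                private var usersSO:SharedObject;   // hypothetical remote shared object

                public function LoginCounter() {
                    rtmpnc = new NetConnection();
                    rtmpnc.objectEncoding = ObjectEncoding.AMF0;
                    // register the listener before connecting, and use an rtmp:// endpoint
                    rtmpnc.addEventListener(NetStatusEvent.NET_STATUS, onNetStatus);
                    rtmpnc.connect("rtmp://202.206.249.193/userLogin"); // adjust host/port to the real server
                }

                private function onNetStatus(event:NetStatusEvent):void {
                    switch (event.info.code) {
                        case "NetConnection.Connect.Success":
                            // only now is the connection usable by a remote shared object
                            usersSO = SharedObject.getRemote("onlineUsers", rtmpnc.uri, false);
                            usersSO.connect(rtmpnc);
                            break;
                        case "NetConnection.Connect.Failed":
                            trace("connection failed");
                            break;
                    }
                }
            }
        }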

    Read the article

  • WCF methods sharing a dictionary

    - by YeomansLeo
    I'm creating a WCF Service Library and I have a question regarding thread safety when consuming a method inside this library. Here is the full implementation that I have so far: namespace WCFConfiguration { [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, ConcurrencyMode = ConcurrencyMode.Single)] public class ConfigurationService : IConfigurationService { ConcurrentDictionary<Tuple<string,string>, string> configurationDictionary = new ConcurrentDictionary<Tuple<string,string>, string>(); public void Configuration(IEnumerable<Configuration> configurationSet) { Tuple<string, string> lookupStrings; foreach (var config in configurationSet) { lookupStrings = new Tuple<string, string>(config.BoxType, config.Size); configurationDictionary.TryAdd(lookupStrings, config.RowNumber); } } public void ScanReceived(string boxType, string size, string packerId = null) { } } } Imagine that I have 10 values in my configurationDictionary and many people want to query this dictionary by consuming the ScanReceived method; will those 10 values be shared across all of the clients that call ScanReceived? Do I need to change my ServiceBehavior? The Configuration method is only consumed by one person, by the way.
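
    Worth noting: with InstanceContextMode.PerCall a brand-new ConfigurationService is created for every call, so the instance-level dictionary filled by Configuration() no longer exists when a later ScanReceived() call arrives. A hedged sketch that keeps PerCall but makes the dictionary static, so one ConcurrentDictionary (already safe for concurrent readers and writers) is shared by all calls; it reuses the IConfigurationService and Configuration types from the question's code.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.ServiceModel;

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                         ConcurrencyMode = ConcurrencyMode.Single)]
        public class ConfigurationService : IConfigurationService
        {
            // static: one dictionary shared by every per-call service instance
            private static readonly ConcurrentDictionary<Tuple<string, string>, string> configurationDictionary =
                new ConcurrentDictionary<Tuple<string, string>, string>();

            public void Configuration(IEnumerable<Configuration> configurationSet)
            {
                foreach (var config in configurationSet)
                {
                    configurationDictionary.TryAdd(
                        Tuple.Create(config.BoxType, config.Size), config.RowNumber);
                }
            }

            public void ScanReceived(string boxType, string size, string packerId = null)
            {
                string rowNumber;
                if (configurationDictionary.TryGetValue(Tuple.Create(boxType, size), out rowNumber))
                {
                    // every caller sees the same configured values here
                }
            }
        }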

    Read the article

  • Shortest command to calculate the sum of a column of output on Unix?

    - by Andrew
    I'm sure there is a quick and easy way to calculate the sum of a column of values on Unix systems (using something like awk or xargs perhaps), but writing a shell script to parse the rows line by line is the only thing that comes to mind at the moment. For example, what's the simplest way to modify the command below to compute and display the total for the SEGSZ column (70300)? ipcs -mb | head -6 IPC status from /dev/kmem as of Mon Nov 17 08:58:17 2008 T ID KEY MODE OWNER GROUP SEGSZ Shared Memory: m 0 0x411c322e --rw-rw-rw- root root 348 m 1 0x4e0c0002 --rw-rw-rw- root root 61760 m 2 0x412013f5 --rw-rw-rw- root root 8192
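
    A hedged one-liner using awk: it sums field 7 (the SEGSZ column) on the shared-memory rows, i.e. the lines whose first field is "m", and prints the total at the end; with the sample output above it prints 70300.

        ipcs -mb | awk '$1 == "m" { total += $7 } END { print total }'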

    Read the article
