Search Results

Search found 16787 results on 672 pages for 'mod disk cache'.


  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS on the local network), sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, that these are slow over NFS, and that this will be even worse in production (where the front-end and back-end have even more network hardware between them) and as our database grows. While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are:

    1. Improve NFS performance by tuning parameters. My instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning.
    2. Move to a different key-value database, such as memcachedb or Tokyo Cabinet.
    3. Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it).

    How should I approach this problem?
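
    To put numbers on option 1 versus the others, a quick micro-benchmark of random fetches against the same GDBM file, once on local disk and once on the NFS mount, would isolate the read-latency component. A minimal sketch using the standard GDBM C API (the database path and key format are placeholders):

        #include <gdbm.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>

        int main(void)
        {
            /* Placeholder path: point at a copy on local disk, then on NFS. */
            GDBM_FILE dbf = gdbm_open("/path/to/data.gdbm", 0, GDBM_READER, 0644, NULL);
            if (!dbf) { fprintf(stderr, "open failed\n"); return 1; }

            char keybuf[32];
            struct timeval t0, t1;
            gettimeofday(&t0, NULL);
            for (int i = 0; i < 10000; i++) {
                /* Placeholder key scheme: adapt to the real one. */
                int n = sprintf(keybuf, "key-%d", rand() % 1000000);
                datum key = { keybuf, n };
                datum val = gdbm_fetch(dbf, key);   /* one random read */
                if (val.dptr) free(val.dptr);       /* caller owns the result */
            }
            gettimeofday(&t1, NULL);
            printf("%.1f us/fetch\n",
                   ((t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec)) / 10000);
            gdbm_close(dbf);
            return 0;
        }

    If the per-fetch gap between local and NFS dominates, tuning (rsize, actimeo, and so on) can only shave a constant factor, which would argue for options 2 or 3.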

    Read the article

  • Problem working with threads

    - by Xaver
    I have a TreeView that shows the file system of a logical disk. When the user selects some files and folders and presses a button, the program evaluates the size of the selected files and folders. This can take a long time, so I decided to run the function on a separate thread. The function works with an array of TreeNode, but when I then want to know whether a node was expanded or not, the runtime complains: "attempt to access control "treeview1" not from the thread, in which it was created." Why does this happen? The following code shows how I create the array of nodes that I send to the new thread:

        void frmMain::FillSelected(TreeNode^ a, array<TreeNode^>^ *Paths)
        {
            if (a->Parent == nullptr)
            {
                for (int j = 0; j < a->Nodes->Count; j++)
                {
                    if ((a->Nodes[j]->ImageIndex == 1) && (a->Nodes[j]->Checked == true))
                    {
                        (*Paths)->Resize((*Paths), (*Paths)->Length + 1);
                        (*Paths)[(*Paths)->Length - 1] = a->Nodes[j];
                    }
                }
            }
            for (int i = 0; i < a->Nodes->Count; i++)
            {
                if (a->Parent == nullptr)
                {
                    FillSelected(a->Nodes[i], Paths);
                }
                else
                {
                    if (a->Nodes[i]->Checked == true)
                    {
                        (*Paths)->Resize((*Paths), (*Paths)->Length + 1);
                        (*Paths)[(*Paths)->Length - 1] = a->Nodes[i];
                    }
                    if ((a->Nodes[i]->Nodes->Count > 0) &&
                        (a->Nodes[i]->Nodes[0]->FullPath != (a->Nodes[i]->FullPath + "\\")))
                    {
                        FillSelected(a->Nodes[i], Paths);
                    }
                }
            }
            return;
        }
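
    Windows Forms controls may only be touched from the thread that created them, and TreeNode properties reached through the control count as touching it. A minimal hedged sketch of marshalling the read back to the UI thread with Control::Invoke (NodeQuery and GetIsExpanded are names invented for this example; the delegate is declared at class scope):

        delegate bool NodeQuery(TreeNode^ node);

        bool frmMain::GetIsExpanded(TreeNode^ node)
        {
            if (this->treeView1->InvokeRequired)
            {
                // Hop to the UI thread and block until it returns the answer.
                return safe_cast<bool>(this->treeView1->Invoke(
                    gcnew NodeQuery(this, &frmMain::GetIsExpanded),
                    gcnew array<Object^> { node }));
            }
            return node->IsExpanded;   // safe: we are on the control's own thread
        }

    An often simpler alternative is to read everything you need (checked state, expansion, full paths) on the UI thread before queueing the work, and hand the worker plain data instead of live TreeNode handles.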

    Read the article

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8s to 1s. I also use Fluent NHibernate, so the path is more like: ClassMap class definitions in code -> FluentConfiguration -> XML -> NHibernate Configuration -> Configuration serialized to disk. The problem with doing this is that one runs the risk of out-of-date mappings: if I change the mappings but forget to rebuild the serialized configuration, I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix. Does anybody have any idea how I would be able to detect that my ClassMaps have changed, so that I could either issue an immediate warning/error or rebuild the configuration on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This picks up mapping changes, but unfortunately it generates a massive false-positive rate, as ANY change to the code results in an out-of-date flag. I can't move the ClassMaps to another assembly, as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?
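
    One way to narrow the false positives: fingerprint only the mapping-relevant surface (the ClassMap types and the properties of the entities they map) and store that hash next to the serialized Configuration. A hedged sketch; the base-type filter and the choice of what goes into the hash are assumptions to tune to your mappings:

        using System;
        using System.Linq;
        using System.Reflection;
        using System.Security.Cryptography;
        using System.Text;

        static class MappingFingerprint
        {
            // Hash the names/types that influence the generated mappings, not the IL.
            public static string Compute(Assembly asm)
            {
                var sb = new StringBuilder();
                var maps = asm.GetTypes()
                    .Where(t => t.BaseType != null && t.BaseType.IsGenericType
                             && t.BaseType.Name.StartsWith("ClassMap"))   // FluentNHibernate's ClassMap<T>
                    .OrderBy(t => t.FullName);
                foreach (var map in maps)
                {
                    sb.Append(map.FullName);
                    var entity = map.BaseType.GetGenericArguments()[0];
                    foreach (var p in entity.GetProperties().OrderBy(p => p.Name))
                        sb.Append('|').Append(p.Name).Append(':').Append(p.PropertyType.FullName);
                }
                using (var sha = SHA1.Create())
                    return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString())));
            }
        }

    Rebuild the serialized configuration whenever the stored hash differs. Renames and type changes get caught while unrelated code edits do not, though changes made purely inside a map's constructor body (say, a column-name string) would still slip through.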

    Read the article

  • Failure remediation strategy for File I/O

    - by Brett
    I'm doing buffered I/O into a file, both read and write, using fopen(), fseeko(), and the other standard ANSI C file I/O functions. In all cases, I'm writing to a standard local file on a disk. How often do these file I/O operations fail, and what should the strategy be for failures? I'm not exactly looking for stats; I'm looking for a general-purpose statement on how far I should go to handle error conditions. For instance, I think everyone recognizes that malloc() could and probably will fail someday on some user's machine, and that the developer should check for a NULL being returned, but there is no great remediation strategy, since it probably means the system is out of memory. At least, this seems to be the approach taken with malloc() on desktop systems; embedded systems are different. Likewise, is it worth reattempting a file I/O operation, or should I just consider a failure to be basically unrecoverable? I would appreciate some code samples demonstrating proper usage, or a library guide reference that indicates how this is to be handled. Any other data is, of course, welcome.
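
    Since the question asks for samples, here is a minimal sketch of the usual desktop-application posture: check every call, surface errno, and treat a local-disk failure as diagnosable-but-fatal for that operation rather than something to retry:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        /* Returns 0 on success, -1 on failure with a message on stderr. */
        int save_buffer(const char *path, const void *buf, size_t len)
        {
            FILE *f = fopen(path, "wb");
            if (f == NULL) {
                fprintf(stderr, "open %s: %s\n", path, strerror(errno));
                return -1;
            }
            if (fwrite(buf, 1, len, f) != len) {
                fprintf(stderr, "write %s: %s\n", path, strerror(errno));
                fclose(f);
                return -1;
            }
            /* fclose() flushes the stdio buffer, so a full disk often
               surfaces here rather than at fwrite(). */
            if (fclose(f) != 0) {
                fprintf(stderr, "close %s: %s\n", path, strerror(errno));
                return -1;
            }
            return 0;
        }

    Retrying rarely helps on a local disk (the usual causes, a full disk, bad media, or a permissions change, don't fix themselves), which is why the common strategy is report-and-abort for the operation rather than a retry loop.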

    Read the article

  • Multi-threaded library calls in ASP.NET page request.

    - by ProfK
    I have an ASP.NET app; it's very basic, but right now there's too much code to post, if we're lucky and I don't have to. We have a class called ReportGenerator. On a button click, the method GenerateReports is called. It makes an async call to InternalGenerateReports using ThreadPool.QueueUserWorkItem and returns, ending the ASP.NET response. It doesn't provide any completion callback or anything. InternalGenerateReports maintains five work items in the thread pool, one report per work item, also queued with QueueUserWorkItem, and waits in a loop until the calls on all five complete. Each work item uses an ASP.NET ReportViewer control to render a report to HTML. That is, for 200 reports, InternalGenerateReports should queue 5 work items 40 times. As work items complete, report data is queued, and when all five have completed, the report data is flushed to disk. My biggest problems are that after running for just one report, the aspnet process is 'hung', and also that at around 200 reports, the app just hangs. I just simplified this code to run in a single thread, and it works fine. Before we get into details like my code, is there anything obvious in the above scenario that might be wrong?
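
    For reference, a minimal sketch of the batch-of-five coordination described above, using a counter plus a ManualResetEvent so the coordinating thread blocks until the whole batch has signalled (ReportJob and Render stand in for the ReportViewer work):

        using System.Collections.Generic;
        using System.Threading;

        static void RenderBatch(List<ReportJob> batch)   // at most 5 jobs per call
        {
            if (batch.Count == 0) return;                // otherwise we'd wait forever
            int pending = batch.Count;
            using (ManualResetEvent done = new ManualResetEvent(false))
            {
                foreach (ReportJob job in batch)
                {
                    ReportJob j = job;                   // per-iteration copy for the closure
                    ThreadPool.QueueUserWorkItem(delegate
                    {
                        try { j.Render(); }              // render one report to HTML
                        finally
                        {
                            // The last job out signals the coordinator.
                            if (Interlocked.Decrement(ref pending) == 0) done.Set();
                        }
                    });
                }
                done.WaitOne();
            }
        }

    One caution that may bear on the hang: blocking a thread-pool thread while it waits for other pool threads is a classic deadlock source under load, because the five queued items can starve while the coordinator occupies a pool thread (and in this design the coordinator itself was queued with QueueUserWorkItem). That alone could explain why the single-threaded version behaves and the pooled one doesn't.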

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax, and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children. So I thought about writing my own socket server, in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a pure web developer (PHP), I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL, but in RAM as an array or object, for best speed. As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; an incoming message gets written to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure whether it can process multiple events at the same time (wouldn't that need multi-threading?) or whether there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but AFAIK there are single-threaded IRC servers (written in C++ or whatever) that are capable of handling thousands of users, so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
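
    For what it's worth, node's model is exactly the single event queue described: one callback runs at a time, but the OS multiplexes all the sockets underneath, so idle connections cost almost nothing and the loop never sits blocked on one slow client. A minimal chat-relay sketch (port and message framing are placeholders):

        var net = require('net');
        var clients = [];

        net.createServer(function (socket) {
            clients.push(socket);
            socket.on('data', function (chunk) {
                // Broadcast to everyone else; write() queues and returns immediately.
                for (var i = 0; i < clients.length; i++) {
                    if (clients[i] !== socket) clients[i].write(chunk);
                }
            });
            socket.on('close', function () {
                clients.splice(clients.indexOf(socket), 1);
            });
            socket.on('error', function () { socket.destroy(); });
        }).listen(8020);

    PHP can get the same multiplexing via stream_select() over an array of client sockets, which is how a single-threaded PHP socket server avoids strict user-after-user processing; what it lacks compared to node is the non-blocking ecosystem around that loop (for the MySQL calls, for example).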

    Read the article

  • Objective-C / Cocoa: Uploading Images, Working Memory, And Storage.

    - by Finley Still
    Hello. I'm in the process of porting an application originally written in Java to Cocoa, and I'm rewriting it to make it much better, since I prefer Cocoa a lot anyway. One of the problems I had in the original application was that when you uploaded images to it, I created the images (as, say, an NSImage object) and then just kept them sitting in memory; the more you uploaded, the more memory they took up, and I ended up running out of memory. My question is this: if users are going to upload images to this Cocoa application, how should I go about storing them? I don't want to just copy the file paths, because I want what is saved to contain the images themselves. Is there any way to take an uploaded image and copy it into a different place, just for my application, and then load the image by its new path as needed? I would like it all to be consolidated. I'm going to implement saving by archiving one "master" object into an NSData*, so I'd like the images to be saved with that. Is there a temporary location where I could write the images to disk for my application, so that when I save, they would all be archived into a single file? Also, how do I do this? Thanks.
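
    A common Cocoa pattern for the "copy it somewhere of my own" part is the app's folder under Application Support. A hedged sketch (the folder name, error handling, and sourcePath are placeholders):

        // Copy an incoming image file into ~/Library/Application Support/MyApp/Images
        // and remember only the destination path in the model.
        NSString *appSupport = [NSSearchPathForDirectoriesInDomains(
            NSApplicationSupportDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *imagesDir = [appSupport stringByAppendingPathComponent:@"MyApp/Images"];

        [[NSFileManager defaultManager] createDirectoryAtPath:imagesDir
                                  withIntermediateDirectories:YES
                                                   attributes:nil
                                                        error:NULL];

        NSString *destPath = [imagesDir stringByAppendingPathComponent:
                                 [sourcePath lastPathComponent]];
        [[NSFileManager defaultManager] copyItemAtPath:sourcePath
                                                toPath:destPath
                                                 error:NULL];

    For the single-file goal, wrapping each image as NSData ([NSData dataWithContentsOfFile:destPath]) inside the archived "master" object does work, but it trades file-system bookkeeping for an archive that grows with every image; keeping the files on disk and archiving only their paths usually scales better.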

    Read the article

  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early, so basically only half the dump is there. It's not an emergency, but I hate being in this situation; it defeats the purpose of making automated backups in the first place. The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions.

        # m h dom mon dow   command
        18 00 * * *   svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt

    I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk-space issue. It seems to bail after two seconds, so this could be a time issue, but the file size has been the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?
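
    One hedged debugging step: svnadmin dump writes the dump itself to stdout but its per-revision progress and any error messages to stderr, which cron normally mails off or discards. Redirecting stderr to a file should show exactly why it stops after revision 169:

        # Same job, but keep stderr so the failure mode is visible
        18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt 2>> /home/andrew/dump.err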

    Read the article

  • Problem sharing a folder programmatically in C#

    - by moon
    Here is my code. It shares the folder, but the share does not work correctly: when I try to access it, I get "access denied". Help required.

        private static void ShareFolder(string FolderPath, string ShareName, string Description)
        {
            try
            {
                // Create a ManagementClass object
                ManagementClass managementClass = new ManagementClass("Win32_Share");

                // Create ManagementBaseObjects for in and out parameters
                ManagementBaseObject inParams = managementClass.GetMethodParameters("Create");
                ManagementBaseObject outParams;

                // Set the input parameters
                inParams["Description"] = Description;
                inParams["Name"] = ShareName;
                inParams["Path"] = FolderPath;
                inParams["Type"] = 0x0; // Disk Drive
                // Other types:
                //   DISK_DRIVE        = 0x0;
                //   PRINT_QUEUE       = 0x1;
                //   DEVICE            = 0x2;
                //   IPC               = 0x3;
                //   DISK_DRIVE_ADMIN  = 0x80000000;
                //   PRINT_QUEUE_ADMIN = 0x80000001;
                //   DEVICE_ADMIN      = 0x80000002;
                //   IPC_ADMIN         = 0x8000003;
                // inParams["MaximumAllowed"] = int maxConnectionsNum;

                // Invoke the method on the ManagementClass object
                outParams = managementClass.InvokeMethod("Create", inParams, null);

                // Check to see if the method invocation was successful
                if ((uint)(outParams.Properties["ReturnValue"].Value) != 0)
                {
                    throw new Exception("Unable to share directory, because the directory is already shared or does not exist.");
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message, "error!");
            }
        }
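
    A likely cause, offered as a hypothesis: no Access security descriptor is passed to Win32_Share.Create, so the share gets the operating system's default share ACL (read-only for Everyone on many setups), which shows up as "access denied" when you try to write or even connect. A hedged sketch of building a descriptor that grants Everyone full control and passing it as the Create method's Access parameter:

        private static ManagementObject EveryoneFullControl()
        {
            ManagementObject trustee = new ManagementClass("Win32_Trustee").CreateInstance();
            trustee["Name"] = "Everyone";

            ManagementObject ace = new ManagementClass("Win32_ACE").CreateInstance();
            ace["AccessMask"] = 2032127;   // full control
            ace["AceFlags"] = 3;           // container + object inherit
            ace["AceType"] = 0;            // access allowed
            ace["Trustee"] = trustee;

            ManagementObject sd = new ManagementClass("Win32_SecurityDescriptor").CreateInstance();
            sd["ControlFlags"] = 4;        // SE_DACL_PRESENT
            sd["DACL"] = new object[] { ace };
            return sd;
        }

        // In ShareFolder, before InvokeMethod:
        //     inParams["Access"] = EveryoneFullControl();

    Note that the share ACL is only half of it: the NTFS permissions on FolderPath still apply, and the effective right is the more restrictive of the two.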

    Read the article

  • AssemblyResolve event is not firing during compilation of a dynamic assembly for an aspx page.

    - by John
    This one is really pissing me off. Here goes: my goal is to load assemblies at run time that contain embedded aspx, ascx, etc. I would also like not to lock the assembly file on disk, so I can update it at run time without having to restart the application (I know this will leave the previous version(s) loaded). To that end I have written a virtual path provider that does the trick. I have subscribed to the CurrentDomain.AssemblyResolve event so as to redirect the framework to my assemblies. The problem is that when the framework tries to compile the dynamic assembly for the aspx page, I get the following:

        Compiler Error Message: CS0400: The type or namespace name 'Pages' could not
        be found in the global namespace (are you missing an assembly reference?)

        Source Error:
        public class app_resource_pages__version_1_0_0_0__culture_neutral__publickeytoken_null_default_aspx
            : global::Pages._Default, System.Web.SessionState.IRequiresSessionState, System.Web.IHttpHandle

    I noticed that if I load the assembly with Assembly.Load(AssemblyName) or Assembly.LoadFrom(filename), I don't get the above error. If I load it with Assembly.Load(byte[]) (so as to not lock it), the exception is thrown, but my AssemblyResolve handler, when called, returns the assembly correctly (it is called once). So I am guessing that it is called once when the framework parses the asp markup, but not when it tries to create the dynamic assembly for the aspx page.
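
    For context on the difference between the load paths: Assembly.Load(byte[]) loads into no binding context, so the build system cannot find the assembly by display name through normal probing, and AssemblyResolve becomes the only channel, as observed. A minimal sketch of a handler that answers from the already-loaded set (this mirrors the setup described, it is not the original code):

        AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
        {
            // args.Name may be a full display name; compare on the simple name.
            string wanted = new AssemblyName(args.Name).Name;
            foreach (Assembly a in AppDomain.CurrentDomain.GetAssemblies())
            {
                if (a.GetName().Name == wanted)
                    return a;
            }
            return null;   // let the loader fail normally for everything else
        };

    A further hedged caveat: the page compiler ultimately hands csc.exe /R: file paths, and a byte-array-loaded assembly has no Location to pass, which may make this route a dead end for dynamically compiled pages regardless of the event. Since the goal is only to avoid locking the file on disk, shadow copying (the same trick ASP.NET uses for the bin folder) combined with Assembly.LoadFrom on the shadow copy is worth considering instead.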

    Read the article

  • Warning: cast increases required alignment

    - by dash-tom-bang
    I've recently been working on a platform where a legacy codebase issues a large number of "cast increases required alignment to N" warnings, where N is the alignment of the cast's target type.

        struct Message
        {
            int32_t id;
            int32_t type;
            int8_t  data[16];
        };

        int32_t GetMessageInt(const Message& m)
        {
            return *reinterpret_cast<const int32_t*>(&m.data[0]);
        }

    Hopefully it's obvious that a "real" implementation would be a bit more complex, but the basic point is that I've got data coming from somewhere, I know that it's aligned (because I need the id and type to be aligned), and yet I get the message that the cast is increasing the alignment, in the example case to 4. Now I know that I can suppress the warning with an argument to the compiler, and I know that I can cast the bit inside the parentheses to void* first, but I don't really want to go through every bit of code that needs this sort of manipulation (there's a lot, because we load a lot of data off of disk, and that data comes in as char buffers so that we can easily pointer-advance). Can anyone give me any other thoughts on this problem? I mean, to me it seems like such an important and common operation that you wouldn't want to warn about it, and if there is actually the possibility of doing it wrong, then suppressing the warning isn't going to help. Finally, can't the compiler know, as I do, how the object in question is actually aligned in the structure, so that it wouldn't need to worry about the alignment of that particular object unless it got bumped a byte or two?
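
    One warning-free alternative worth weighing: replace the cast with memcpy, which makes no alignment claim at all; where the load really is aligned, compilers routinely collapse this to the same single instruction. A sketch against the same struct:

        #include <cstring>

        int32_t GetMessageInt(const Message& m)
        {
            int32_t value;
            std::memcpy(&value, m.data, sizeof value);   // no alignment assumption
            return value;
        }

    As to the last question: the warning machinery looks only at the types in the cast expression, and a pointer to int8_t could point anywhere, so the compiler cannot in general prove the 4-byte alignment that the surrounding struct happens to guarantee here.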

    Read the article

  • Developing an online music store

    - by Simon
    We need to develop an application to sell music online. Needless to say, everything will be done quite legally, and accordingly we have to plan an interface for paying the artists. However, we are confronted with a question: what is the best way to store the music on the server? Should we save it to the server's disk from an HTTP file upload? Should we save it via FTP, or would it be wiser to save it in the database? It goes without saying that we need this to be as safe as possible, so maybe HTTPS is required here. Which do you think is the best way? Maybe another idea? In the all-HTTP case, uploading songs (for administration) is quite long and boring, but an uploaded song is easily linkable to the song record the admin creates in his web application; by comparison, with FTP we would upload songs to the server and then list the directory in the admin area to link the correct uploaded file to the song information in the database. I know this is maybe not quite clear (I'm French), so tell me and I will try to explain any part that you don't understand.

    Read the article

  • IIS 6+ASP.NET - many temp files generated

    - by moshe_ravid
    I have an ASP.NET app plus some .NET web services running on IIS 6 (Windows Server 2003). The issue is that IIS is generating a lot (!) of files in the "c:\WINDOWS\Temp" directory. "A lot" means thousands of files, amounting to more than 3 GB so far. The files are generated by this command:

        C:\WINDOWS\SysWOW64\inetsrv "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\csc.exe" /t:library /utf8output
            /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\vfagt\113819dd\db0d5802\assembly\dl3\fedc6ef1\006e24d8_3bc9ca01\VfAgentWService.DLL"
            /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Web.Services\2.0.0.0__b03f5f7f11d50a3a\System.Web.Services.dll"
            /R:"C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\mscorlib.dll"
            /R:"C:\WINDOWS\assembly\GAC_MSIL\System.Xml\2.0.0.0__b77a5c561934e089\System.Xml.dll"
            /R:"C:\WINDOWS\assembly\GAC_MSIL\System\2.0.0.0__b77a5c561934e089\System.dll"
            /out:"C:\WINDOWS\TEMP\9i_i2bmg.dll" /debug- /optimize+ /nostdlib
            /D:_DYNAMIC_XMLSERIALIZER_COMPILATION "C:\WINDOWS\TEMP\9i_i2bmg.0.cs"

    The files in the temp directory are pairs of *.out and *.err files, where the *.err file is zero size and the *.out file contains the compilation output messages. What is causing IIS to generate so many files? How can I prevent it? UPDATE: The problem is that the command described above (csc.exe) is being executed many (many) times, causing the .out and .err files to be generated so many times that they consume the disk space. So my question is: what is causing this command to run so many times? (I don't have that many .aspx and .asmx files in my web app.) Thanks, Moe
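
    The /D:_DYNAMIC_XMLSERIALIZER_COMPILATION define in that command line points at XmlSerializer rather than page compilation: every XmlSerializer constructed through any overload other than XmlSerializer(Type) or XmlSerializer(Type, string defaultNamespace) generates and compiles a fresh temporary assembly, one csc.exe run per construction. If the application (or something it calls) does that per request, caching the serializers stops the flood; a hedged sketch:

        using System;
        using System.Collections.Generic;
        using System.Xml.Serialization;

        static class SerializerCache
        {
            static readonly Dictionary<string, XmlSerializer> cache =
                new Dictionary<string, XmlSerializer>();
            static readonly object sync = new object();

            // Only the (Type) and (Type, string) constructors use the framework's
            // built-in cache, so the other overloads are the ones worth wrapping.
            public static XmlSerializer Get(Type type, XmlRootAttribute root)
            {
                string key = type.FullName + "|" + root.ElementName;
                lock (sync)
                {
                    XmlSerializer s;
                    if (!cache.TryGetValue(key, out s))
                    {
                        s = new XmlSerializer(type, root);   // compiles once
                        cache[key] = s;
                    }
                    return s;
                }
            }
        }

    If the constructions come from framework plumbing you don't control, pre-generating the serializer assemblies with sgen.exe at build time avoids the runtime csc.exe runs altogether.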

    Read the article

  • What are the attributes that need to be sent with this FedEx shipping script?

    - by Fero
    Could anyone please tell me what attributes are sent with $ship_data? I am confused about it. I know that the origin address, destination address, name, etc. need to be sent, but how should the data be arranged? Like, is name first, address second, etc.? Any help will be useful.

        // create new fedex object
        $fed = new FedExDC('#########', '#########');

        $ship_data = array(
            75   => 'LBS',
            16   => 'Ma',
            13   => '44 Main street',
            5    => '312 stuart st',
            1273 => '01',
            1274 => '01',
            18   => '6173335555',
            15   => 'Boston',
            23   => '1',
            9    => '02134',
            183  => '6175556985',
            8    => 'MA',
            117  => 'US',
            17   => '02116',
            50   => 'US',
            4    => 'Vermonster LLC',
            7    => 'Boston',
            1369 => '1',
            12   => 'Jay Powers',
            1333 => '1',
            1401 => '1.0',
            116  => 1,
            68   => 'USD',
            1368 => 1,
            1369 => 1,
            1370 => 5
        );

        // Ship example
        $ship_Ret = $fed->ship_express($ship_data);
        if ($error = $fed->getError()) {
            echo "ERROR :" . $error;
        } else {
            // Save the label to disk
            $fed->label('mylabel.png');
        }

        /* tracking example
        $track_Ret = $fed->track(array(
            29 => 790344664540,
        ));
        echo $fed->debug_str . "\n";
        echo "Price " . $ship_Ret[1419]; */

        echo "";
        if ($error = $fed->getError()) {
            die("ERROR: " . $error);
        } else {
            // decode and save label
            $fed->label('myLabel.png');
            echo $fed->debug_str . "\n";
            echo "\n\n";
            echo "Price $" . $ship_Ret[1419];
            echo "\n";
            echo "Tracking# " . $ship_Ret[29];
        }
        echo "";

    Thanks Fero

    Read the article

  • How to install previously archived apps from Xcode Organizer to my iPhone

    - by Ben Clayton
    Hi all. Xcode keeps an archive of all the versions of my apps that I've submitted to the App Store in the 'archived applications' section. I assumed that, using this, I could install an old version of an app on my device, in order to reproduce any problems my client may have had with that particular version. However, when I try to do this I get an error: 'this executable was signed with invalid entitlements, the entitlements specified in your application's code signing entitlements do not match those specified in your provisioning profile'. The original app was signed using our App Store distribution certificate, and I use the Organizer interface to re-sign it using our developer profile:

    1. select the archived app
    2. select the version I want to test
    3. click 'share'
    4. select 'iphone developer' next to identity
    5. save to disk (saves the ipa file)

    Then I copy the ipa to the device using the little + button you see next to 'applications' on the screen you get when you select the connected device. Then I get the error, and the app isn't installed. Is there something obvious I'm doing wrong here? Or is there a different process to re-install an archived app on my device? Thanks,
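
    If the Organizer route keeps failing, one hedged workaround is to re-sign the .ipa by hand, swapping in a development provisioning profile so the embedded profile and the entitlements agree. The identity, file names, and entitlements plist below are all placeholders:

        # Unpack the archived build
        unzip MyApp.ipa

        # Swap the App Store profile for a development one
        cp ~/Library/MobileDevice/Provisioning\ Profiles/dev.mobileprovision \
           Payload/MyApp.app/embedded.mobileprovision

        # Re-sign with the development identity and matching entitlements
        codesign -f -s "iPhone Developer: Your Name" \
           --entitlements Entitlements.plist Payload/MyApp.app

        # Repack and install the result from the Organizer
        zip -qr MyApp-dev.ipa Payload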

    Read the article

  • Program to format any media in C++

    - by AtoMerZ
    Problem: I'm trying to write a program that formats any type of media. So far I've managed to format hard disk partitions, flash memories, SDRAM, and RDX. But there's one last type of media (DVD-RAM) I need to format, and my program fails on it. I'm using the FormatEx function in fmifs.dll. I had absolutely no idea how to use this function except for its name and the fact that it resides in fmifs.dll; with the help of this I managed to find a simple program that uses this library, yet it still doesn't give complete information about how to use it. What I've tried: I'm looking for full documentation of FormatEx, its parameters, and exactly what values each parameter can take. I tried searching on Google and MSDN. What I found isn't the function I'm working with, and even putting that aside, there's not enough information on how to use the function (like which headers/libraries to use). EDIT: I don't have to use FormatEx; if there's an alternative, please tell me.
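
    For reference, FormatEx is not documented by Microsoft at all; the declaration below is the one commonly reverse-engineered and circulated in formatting samples, so treat every detail of it, including the DVD-RAM/UDF choice, as an assumption rather than a contract:

        #include <windows.h>
        #include <winioctl.h>   // MEDIA_TYPE

        // Community-derived declaration of the undocumented fmifs.dll export.
        typedef BOOLEAN (__stdcall *PFMIFSCALLBACK)(int Command, DWORD SubAction,
                                                    void *ActionInfo);
        typedef void (__stdcall *PFORMATEX)(PWCHAR DriveRoot, MEDIA_TYPE MediaType,
                                            PWCHAR FileSystemTypeName, PWCHAR Label,
                                            BOOL QuickFormat, DWORD ClusterSize,
                                            PFMIFSCALLBACK Callback);

        static BOOLEAN __stdcall OnFormatProgress(int, DWORD, void*)
        {
            return TRUE;   // returning FALSE aborts the format
        }

        void FormatDvdRam()
        {
            HMODULE fmifs = LoadLibraryW(L"fmifs.dll");
            if (!fmifs) return;
            PFORMATEX FormatEx = (PFORMATEX)GetProcAddress(fmifs, "FormatEx");
            if (!FormatEx) return;
            // DVD-RAM is usually formatted as UDF (or FAT32); drive letter is a placeholder.
            FormatEx((PWCHAR)L"E:\\", RemovableMedia, (PWCHAR)L"UDF",
                     (PWCHAR)L"BACKUP", FALSE, 0, OnFormatProgress);
        }

    A documented alternative that avoids fmifs.dll entirely is to shell out to format.com (e.g. "format E: /FS:UDF /Q"), which supports UDF on Vista and later and reports failures through its exit code.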

    Read the article

  • Why does File.Exists return false?

    - by Jonas Stawski
    I'm querying all images on the Android device like this:

        string[] columns = {
            MediaStore.Images.Media.InterfaceConsts.Data,
            MediaStore.Images.Media.InterfaceConsts.Id
        };
        string orderBy = MediaStore.Images.Media.InterfaceConsts.Id;
        var imagecursor = ManagedQuery(MediaStore.Images.Media.ExternalContentUri,
                                       columns, null, null, orderBy);

        for (int i = 0; i < this.Count; i++)
        {
            imagecursor.MoveToPosition(i);
            Paths[i] = imagecursor.GetString(dataColumnIndex);
            Console.WriteLine(Paths[i]);
            Console.WriteLine(System.IO.File.Exists(Paths[i]));
        }

    The problem is that the output shows that some files don't exist. Here's a sample output:

        /storage/sdcard0/Download/On-Yom-Kippur-Jews-choose-different-shoes-VSETQJ6-x-large.jpg False
        /storage/sdcard0/Download/397277_10151250943161341_876027377_n.jpg False
        /storage/sdcard0/Download/Roxy_Cottontail_&_Melo-X_Present..._Some_Bunny_Love's_You.jpg False
        /storage/sdcard0/Download/album-The-Rolling-Stones-Some-Girls.jpg True
        /storage/sdcard0/Download/some-people-ust-dont-appreciate-fashion[1].jpg True
        /storage/sdcard0/Download/express.gif True
        ...
        /storage/sdcard0/Download/some-joys-are-expressed-better-in-silence.JPG False

    How is this possible? I downloaded these images myself from the internet! They should exist on disk.
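
    One hedged explanation: the MediaStore index can outlive the files it describes. Rows for images that were later deleted or moved stick around until the media scanner re-indexes, so File.Exists is the ground truth. A sketch of skipping stale rows and nudging the scanner about them (Xamarin.Android, inside the same Activity):

        var existing = new List<string>();
        foreach (string path in Paths)
        {
            if (System.IO.File.Exists(path))
            {
                existing.Add(path);
            }
            else
            {
                // Ask the media scanner to revisit the stale entry.
                SendBroadcast(new Intent(Intent.ActionMediaScannerScanFile,
                                         Android.Net.Uri.FromFile(new Java.IO.File(path))));
            }
        }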

    Read the article

  • How do I keep my DataService up to date with ObservableCollection?

    - by joebeazelman
    I have a class called CustomerService, which simply reads a collection of customers from a file (or creates one) and passes it back to the main view model, where it is turned into an ObservableCollection. What is the best practice for making sure the items in the CustomerService and the ObservableCollection stay in sync? I'm guessing I could hook up the CustomerService object to respond to RaisePropertyChanged, but isn't that only for use with WPF controls? Is there a better way?

        using System;

        public class MainModelView
        {
            public MainModelView()
            {
                _customers = new ObservableCollection<CustomerViewModel>(
                    new CustomerService().GetCustomers());
            }

            public const string CustomersPropertyName = "Customers";

            private ObservableCollection<CustomerViewModel> _customers;

            public ObservableCollection<CustomerViewModel> Customers
            {
                get { return _customers; }
                set
                {
                    if (_customers == value)
                    {
                        return;
                    }
                    var oldValue = _customers;
                    _customers = value;

                    // Update bindings and broadcast change using GalaSoft.MvvmLight.Messaging
                    RaisePropertyChanged(CustomersPropertyName, oldValue, value, true);
                }
            }
        }

        public class CustomerService
        {
            // Load all persons from file on disk.
            private List<CustomerViewModel> _customers = new List<CustomerViewModel>
            {
                new CustomerViewModel(new Customer("Bob", "")),
                new CustomerViewModel(new Customer("Bob 2", "")),
                new CustomerViewModel(new Customer("Bob 3", "")),
            };

            public IEnumerable<CustomerViewModel> GetCustomers()
            {
                return _customers;
            }
        }
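
    On the "isn't that only for WPF controls?" part: ObservableCollection raises INotifyCollectionChanged, which is an ordinary .NET interface that any class can subscribe to, not a WPF-only mechanism. A hedged sketch of wiring the service to the collection (Add and Remove on the service are hypothetical methods):

        // In MainModelView's constructor, after creating _customers:
        var service = new CustomerService();
        _customers.CollectionChanged += (sender, e) =>
        {
            if (e.NewItems != null)
                foreach (CustomerViewModel c in e.NewItems)
                    service.Add(c);      // hypothetical: persist the new customer

            if (e.OldItems != null)
                foreach (CustomerViewModel c in e.OldItems)
                    service.Remove(c);   // hypothetical: drop it from the store
        };

    The caveat is that CollectionChanged only reports adds, removes, and replaces on the collection itself; edits inside an individual CustomerViewModel still need INotifyPropertyChanged (or explicit save calls) to reach the service.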

    Read the article

  • How do you fix loading plugins in eclipse 3.5.1 on linux?

    - by Jay R.
    I have two Linux boxes, both Fedora 11 x64. On one, I downloaded eclipse-java-galileo-SR1-linux-gtk-x86_64.tar.gz, unpacked it to /opt/eclipse-3.5.1/, and used the "Install New Software..." item to install the SVN team provider and the Polarion SVN connectors. Everything works. On the second, I copied the eclipse tar.gz there and tried to follow the same steps. When I get to installing the SVN team provider, Eclipse downloads it, claims to install it, and asks to restart. I restart, and there is no SVN support. The software installer knows it's there, because I can't reinstall it without uninstalling it first. So the questions: Why isn't the plugin/feature loading for the SVN team support? Is there a checkbox I forgot about that enables the plugin? Is there a command-line option that will force-reload all of the features on disk? I've tried to install other things, like FindBugs, but I get the same result. I have no messages in the log file indicating an exception or anything like that.
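
    On the force-reload question: Equinox caches its resolved bundle state, and starting Eclipse once with -clean discards those caches and re-resolves everything installed on disk, which is a common fix when freshly installed features silently fail to appear:

        # One-off start with the OSGi caches flushed (path from above)
        /opt/eclipse-3.5.1/eclipse -clean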

    Read the article

  • ASP.NET web form Routing issue via UNC Path

    - by Slash
    I created an IIS 7.0 website via a UNC path, so the .aspx files are loaded and dynamically compiled from the share, and that runs perfectly. I have always used IIS URL Rewrite Module 2 to rewrite my site URLs, and that works perfectly too. Today I wanted to use System.Web.Routing to implement URL rewriting, but I encountered difficulties. I wrote this in Global.asax:

        System.Web.Routing.RouteTable.Routes.MapPageRoute("TEST", "AAA/{prop}", "~/BBB/CCC.aspx");

    And it just CANNOT redirect to /BBB/CCC.aspx. When I type the URL (like xx.xx.xx.xx/BBB/CCC.aspx) into the browser directly, it runs normally as expected (which proves CCC.aspx is at the right path). Then I copied all of the code and opened it in VS2010 running with IIS 7.5 Express locally, and it works perfectly: if I type xx.xx.xx.xx/AAA/1234 in the browser URL, it goes to the page xx.xx.xx.xx/BBB/CCC.aspx. Why??? Help me please. Thanks. Update: I thought I should consider whether the UNC path was making it fail, but when I moved all the code to a physical disk and set up IIS 7.0 to serve that folder, it still doesn't work! Yet the same code in VS2010 + IIS 7.5 Express works!? So strange!
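
    One hedged thing to check on the IIS 7.0 box: an extensionless URL like /AAA/1234 only reaches the managed UrlRoutingModule if managed modules run for that request at all. IIS 7.5 Express handles extensionless URLs out of the box, while plain IIS 7.0 generally needs either the extensionless-URL hotfix (KB980368) or this web.config setting:

        <!-- web.config: let routed, extensionless requests reach managed code -->
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>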

    Read the article

  • Which field type is best for storing PRICE values?

    - by BerggreenDK
    Hi there. I am wondering what the best "price field" in MSSQL is for a shop-like structure. Looking at this overview: http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html we have data types called money and smallmoney, then we have decimal/numeric, and lastly float and real. Name, memory/disk usage, and value ranges:

        Money:      8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807)
        Smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647)
        Decimal:    9 [default, min. 5] bytes (values: -10^38 +1 to 10^38 -1)
        Float:      8 bytes (values: -1.79E+308 to 1.79E+308)
        Real:       4 bytes (values: -3.40E+38 to 3.40E+38)

    My question is: is it really wise to store price values in those types? What about, e.g., INT?

        Int: 4 bytes (values: -2,147,483,648 to 2,147,483,647)

    Let's say a shop uses dollars, and they have cents, but I don't see prices like $49.2142342, so using a lot of decimal places to show cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200,000,000 (not in normal webshops, at least, unless someone is trying to sell me a famous tower in Paris). So why not go for an int? An int is fast, it's only 4 bytes, and you can easily handle decimals by saving values in cents instead of dollars and then dividing when you present the values. The other approach would be to use smallmoney, which is 4 bytes too, but that requires the math part of the CPU to do the calculation, whereas int is pure integer work; on the downside, you will need to divide every single outcome. Are there any currency-related problems with regional settings when using smallmoney/money fields? What will these be transferred to in C#/.NET? Any pros/cons? Go for integer prices, or smallmoney, or something else? What does your experience tell you?
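
    On the C#/.NET half of the question: ADO.NET surfaces money and smallmoney (like decimal/numeric) as System.Decimal, so regional settings never enter into the data transfer itself, only into formatting for display. And if you go the cents-as-int route, the conversion belongs at the edges, as in this sketch:

        // Store cents in an int column; convert only at the boundary.
        static int PriceToCents(decimal price)
        {
            // Round first; checked() throws on overflow instead of wrapping.
            return checked((int)decimal.Round(price * 100m));
        }

        static decimal CentsToPrice(int cents)
        {
            return cents / 100m;   // decimal division, no binary rounding error
        }

    float and real are the ones to avoid for prices: binary floating point cannot represent most decimal cent values exactly, which is how small discrepancies creep into totals.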

    Read the article

  • Shrinking the transaction log of a mirrored SQL Server 2005 database

    - by Peter Di Cecco
    I've been looking all over the internet and I can't find an acceptable solution to my problem; I'm wondering if there even is a solution without a compromise... I'm not a DBA, but I'm a one-man team working on a huge web site with no extra funding for extra bodies, so I'm doing the best I can. Our backup plan sucks, and I'm having a really hard time improving it. Currently, there are two servers running SQL Server 2005. I have a mirrored database (no witness) that seems to be working well. I do a full backup at noon and at midnight. These get backed up to tape by our service provider nightly, and I burn the backup files to DVD weekly to keep old records on hand. Eventually I'd like to switch to log shipping, since mirroring seems kinda pointless without a witness server. The issue is that the transaction log is growing non-stop. From the research I've done, it seems that I can't truncate the log file of a mirrored database. So how do I stop the file from growing!? Based on this web page, I tried this:

        USE dbname
        GO
        CHECKPOINT
        GO
        BACKUP LOG dbname TO DISK='NULL'
        WITH NOFORMAT, INIT, NAME = N'dbnameLog Backup',
        SKIP, NOREWIND, NOUNLOAD
        GO
        DBCC SHRINKFILE('dbname_Log', 2048)
        GO

    But that didn't work. Everything else I've found says I need to disable the mirror before running the backup log command in order for it to work. My Question (TL;DR): How can I shrink my transaction log file without disabling the mirror?
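
    A hedged sketch of the usual alternative: a mirrored database must stay in the FULL recovery model, so the log can never be truncated outright, but a genuine log backup to a real file (DISK='NULL' does not count) marks the inactive portion reusable without touching the mirror. After a backup or two, the shrink has space to reclaim (path and target size are placeholders):

        -- Take a real log backup, on a schedule, so the log can wrap.
        BACKUP LOG dbname TO DISK = N'D:\Backups\dbname_log.trn';
        GO

        -- Now release the inactive space.
        USE dbname;
        GO
        DBCC SHRINKFILE (N'dbname_Log', 2048);
        GO

    Scheduled log backups also mean the log stops growing without bound in the first place, which removes the need to shrink repeatedly.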

    Read the article

  • ActiveX won't run from server

    - by user1709555
    I have an MFC ActiveX control that runs fine from disk, but when I put it on a server I get errors. Client: Win7 machine. Server: Ubuntu running Apache. The HTML and the errors are below; please advise. 10xs, Nahum HTML:

        <html>
        <HEAD>
        <TITLE>myFirstOCX.CAB</TITLE>
        <script type="text/javascript" FOR="window">
        function fn()
        {
            try {
                document.all('Ctrl1').AboutBox();
                // error: Undifiend : object doesn't have AboutBox() method
                // OR
                var obj = new ActiveXObject("activex.activexCtrl");
                obj.AboutBox();
                // error: Undifiend : Automation server can't create object
            }
            catch (ex) {
                alert("Error: " + ex.Description + " : " + ex.message);
            }
        }
        </script>
        </HEAD>
        <body bgcolor=lightblue>
        <TABLE BORDER>
          <TR>
            <TD>
              <OBJECT CLASSID="CLSID:E228C560-FA68-48E6-850F-B1167515C920"
                      CODEBASE=".\nsip.CAB#version=1,0,0,1"
                      ID="Ctrl1" name="Ctrl1">
              </OBJECT>
            </TD>
          </TR>
          <TR>
            <TD ALIGN="CENTER">
              <INPUT TYPE=BUTTON VALUE="Click Me" onclick="fn()">
            </TD>
          </TR>
        </TABLE>
        <INPUT TYPE=TEXT ID="ConnectionString" VALUE="">
        </body>
        </html>
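
    Two hedged things to check when the same control moves from disk to a web server: the CODEBASE should be a forward-slash URL relative to the page (a Windows-style ".\" path is at best fragile over HTTP), and Internet Explorer will neither install nor script a control from the Internet zone unless the CAB is signed and the control is marked safe for scripting (IObjectSafety or the component-category registry entries), which would match both errors quoted in the comments:

        <OBJECT CLASSID="CLSID:E228C560-FA68-48E6-850F-B1167515C920"
                CODEBASE="nsip.CAB#version=1,0,0,1"
                ID="Ctrl1" NAME="Ctrl1">
        </OBJECT>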

    Read the article

  • initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.0 (--configure)

    - by mazgalici
        2 not fully installed or removed.
        Need to get 0B of archives.
        After unpacking 0B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up mysql-server-5.0 (5.0.32-7etch12) ...
        Stopping MySQL database server: mysqld.
        Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.0 (--configure):
         subprocess post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.0; however:
          Package mysql-server-5.0 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         mysql-server-5.0
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)
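
    The dpkg output only reports that mysqld failed to start; the actual reason lands in MySQL's or the system's log. A hedged first step (log locations vary by configuration):

        # See why mysqld refuses to start, then retry the package configuration
        tail -n 50 /var/log/syslog /var/log/mysql/mysql.err
        /etc/init.d/mysql start
        dpkg --configure -a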

    Read the article

  • Handling close-to-impossible collisions on should-be-unique values

    - by balpha
    There are many systems that depend on the uniqueness of some particular value. Anything that uses GUIDs comes to mind (e.g. the Windows registry or other databases), but also things that create a hash from an object to identify it, and thus need this hash to be unique. A hash table usually doesn't mind if two objects have the same hash, because the hashing is just used to break the objects down into categories, so that on lookup, not all objects in the table, but only those objects in the same category (bucket), have to be compared for identity with the searched-for object. Other implementations, however, (seem to) depend on the uniqueness. My example (and what led me to asking this) is Mercurial's revision IDs. An entry on the Mercurial mailing list correctly states:

        The odds of the changeset hash colliding by accident in your first
        billion commits is basically zero. But we will notice if it happens.
        And you'll get to be famous as the guy who broke SHA1 by accident.

    But even the tiniest probability doesn't mean impossible. Now, I don't want an explanation of why it's totally okay to rely on the uniqueness (this has been discussed here, for example). This is very clear to me. Rather, I'd like to know (maybe by means of examples from your own work): Are there any best practices for covering these improbable cases anyway? Should they be ignored, because it's more likely that particularly strong solar winds lead to faulty hard disk reads? Should they at least be tested for, if only to fail with an "I give up, you have done the impossible" message to the user? Or should even these cases be handled gracefully? For me, especially the following are interesting, although they are somewhat touchy-feely: If you don't handle these cases, what do you do against gut feelings that don't listen to probabilities? If you do handle them, how do you justify this work (to yourself and others), considering there are more probable cases you don't handle, like a supernova?
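
    To put a number behind the quoted claim, the birthday bound for n = 10^9 random 160-bit (SHA-1) hashes gives roughly

        p ≈ n(n - 1)/2 · 2^(-160) ≈ (5 × 10^17) / (1.46 × 10^48) ≈ 3.4 × 10^(-31)

    which is a great many orders of magnitude below the unrecoverable-read error rates quoted for hard disks, so the solar-wind comparison is not rhetorical: the hardware really is the weaker link.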

    Read the article
