Search Results

Search found 16940 results on 678 pages for 'disk drive'.


  • Write asynchronously to file in perl

    - by Stefhen
    Basically I would like to:

    - Read a large amount of data from the network into an array in memory.
    - Asynchronously write this array data, running it thru bzip2 before it hits the disk.
    - Repeat.

    Is this possible? If this is possible, I know that I will have to somehow read the next pass of data into a different array, as the AIO docs say that this array must not be altered before the async write is complete. I would like to background all of my writes to disk in order, as the bzip2 pass is going to take much longer than the network read. Is this doable? Below is a simple example of what I think is needed, but this just reads a file into array @a for testing.

        use warnings;
        use strict;
        use EV;
        use IO::AIO;
        use Compress::Bzip2;
        use FileHandle;
        use Fcntl;

        my @a;
        print "loading to array...\n";
        while (<>) {
            $a[$. - 1] = $_;
        }
        print "array loaded...\n";

        my $aio_w = EV::io IO::AIO::poll_fileno, EV::WRITE, \&IO::AIO::poll_cb;

        aio_open "./out", O_WRONLY || O_NONBLOCK, 0, sub {
            my $fh = shift or die "error while opening: $!\n";
            aio_write $fh, undef, undef, $a, -1, sub {
                $_[0] > 0 or die "error: $!\n";
                EV::unloop;
            };
        };

        EV::loop EV::LOOP_NONBLOCK;

    Read the article

  • Password Cracking Windows Accounts

    - by Kevin
    At work we have laptops with encrypted hard drives. Most developers here (on occasion I have been guilty of it too) leave their laptops in hibernate mode when they take them home at night. Obviously, Windows (i.e. there is a program running in the background which does it for Windows) must have a method to decrypt the data on the drive, or it wouldn't be able to access it. That being said, I always thought that leaving a Windows machine in hibernate mode in a non-secure place (not at work on a lock) is a security threat, because someone could take the machine, leave it running, hack the Windows accounts and use it to decrypt the data and steal the information.

    When I got to thinking about how I would go about breaking into the Windows system without restarting it, I couldn't figure out if it was possible. I know it is possible to write a program to crack Windows passwords once you have access to the appropriate file(s). But is it possible to execute a program from a locked Windows system that would do this? I don't know of a way to do it, but I am not a Windows expert. If so, is there a way to prevent it? I don't want to expose security vulnerabilities, so I would ask that nobody post the necessary steps in detail, but if someone could say something like "Yes, it's possible, the USB drive allows arbitrary execution," that would be great!

    EDIT: The idea behind the encryption is that you can't reboot the system, because once you do, the disk encryption on the system requires a login before being able to start Windows. With the machine in hibernate, the system owner has already bypassed the encryption for the attacker, leaving Windows as the only line of defense to protect the data.

    Read the article

  • How do you remove a site from Sharepoint Designer?

    - by xpda
    I would like to use SharePoint Designer 2007 as an HTML editor. I have a web site with a lot of files in a folder on my hard drive. I do not want SharePoint Designer to make a web site out of this. I just want to use SharePoint Designer to edit the HTML files, locally. If I ever make a mistake and click on a tool for Sites, such as summary or report, SharePoint Designer will decide that my folder is now a web site. From that point on, SharePoint Designer is painfully slow whenever I open a file contained in the folder that SharePoint decided is my web site, instead of being instantaneous like it was before.

    I can resolve this situation by renaming the folder containing my web site -- everything gets fast again. I can also fix it by uninstalling and reinstalling SharePoint Designer. Neither of these is a good solution. Is there a place in SharePoint Designer, or in application data or the registry, where I can kill off the SharePoint Designer web site that's associated with a folder on my hard drive?

    Read the article

  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc. I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (as most of these projects seem to be geared towards web apps). I write servers (Order Management Systems actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination while possibly transforming some of these messages. I am looking at these products to solve the following problems:

    - Safe repository of the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
    - This repository should be distributed: in case one of these data stores crashes, one or two more should be up and I should be able to switch to them seamlessly.
    - This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs to have high throughput and low latency. By sending my data outside the process, I am necessarily slowing performance, but I am trying to balance absolute raw speed and absolute protection of data.
    - This repository should be safe. Similar to the point about several on-line backups, this system needs to write data to disk (potentially more than one disk).

    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks
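
    For a sense of what the embedded option looks like in Java, here is a minimal sketch assuming Ehcache 2.x with a programmatically added cache. The class name and the "orderState" cache name are hypothetical, and a real deployment would still need replication and disk persistence configured (e.g. via ehcache.xml) before it could meet the distribution and durability requirements listed above.

        import net.sf.ehcache.Cache;
        import net.sf.ehcache.CacheManager;
        import net.sf.ehcache.Element;

        public class OrderStateRepository {
            private final CacheManager manager;
            private final Cache cache;

            public OrderStateRepository() {
                // Uses the default (failsafe) configuration; a real setup would load an
                // ehcache.xml that enables replication and overflow/persistence to disk.
                manager = CacheManager.create();
                manager.addCache("orderState");          // hypothetical cache name
                cache = manager.getCache("orderState");
            }

            public void put(String orderId, java.io.Serializable state) {
                cache.put(new Element(orderId, state));  // keep a copy of the state outside the transformer chain
            }

            public Object get(String orderId) {
                Element e = cache.get(orderId);
                return e == null ? null : e.getObjectValue();
            }

            public void shutdown() {
                manager.shutdown();
            }
        }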

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    Ok so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows.

    I currently have a 250 gig database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning.

    So in order to improve parallelism what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

    Read the article

  • PHP, jQuery and Ajax Object Orientation

    - by pastylegs
    I'm a fairly experienced programmer getting my head around PHP and Ajax for the first time, and I'm having a bit of trouble figuring out how to incorporate object-oriented PHP into my Ajax web app. I have an admin page (admin.php) that will load and write information (info.xml) from an XML file depending on the user's selection in a form on the admin page. I have decided to use an object (ContentManager.php) to manage the loading and writing of the XML file to disk, i.e.:

        class ContentManager {
            var $xml_attribute_1;
            ...

            function __construct() {
                // load the xml file from disk and save its contents into variables
                $xml_attribute = simplexml_load_file('/path/to/xml');
            }

            function get_xml_contents() {
                return xml_attribute;
            }

            function write_xml($contents) {
            }

            function print_xml() {
            }
        }

    I create the ContentManager object in admin.php like so:

        <?php
            include '../includes/CompetitionManager.php';
            $cm = new CompetitionManager();
        ?>
        <script>
            ...all my jquery
        </script>
        <html>
            ... all my form elements
        </html>

    So now I want to use AJAX to allow the user to retrieve information from the XML file via the ContentManager app using an interface (ajax_handler.php) like so:

        <?php
            if ($_POST[] == "get_a") {
            } else if () {
            }
            ...
        ?>

    I understand how this would work if I wasn't using objects, i.e. the handler php file would do a certain action depending on a variable in the .post request, but with my setup, I can't see how I can get a reference to the ContentManager object I have created in admin.php from the ajax_handler.php file. Maybe my understanding of PHP object scope is flawed. Anyway, if anyone can make sense of what I'm trying to do, I would appreciate some help!

    Read the article

  • Problem with stack based implementation of function 0x42 of int 0x13

    - by IceCoder
    I'm trying a new approach to int 0x13 (just to learn more about the way the system works): using the stack to create a DAP. Assuming that DL contains the disk number, AX contains the address of the bootable entry in the PT, DS is updated to the right segment and the stack is correctly set, this is the code:

        push DWORD 0x00000000
        add ax, 0x0008
        mov si, ax
        push DWORD [ds:(si)]
        push DWORD 0x00007c00
        push WORD 0x0001
        push WORD 0x0010
        push ss
        pop ds
        mov si, sp
        mov sp, bp
        mov ah, 0x42
        int 0x13

    As you can see: I push the DAP structure onto the stack, update DS:SI to point to it, DL is already set, then set AH to 0x42 and call int 0x13. The result is error 0x01 in AH and obviously CF set. No sectors are transferred. I checked the stack trace endlessly and it is ok, the partition table is ok too. I cannot figure out what I'm missing... This is the stack trace portion of the disk address packet:

        0x000079ea: 10 00        adc %al,(%bx,%si)
        0x000079ec: 01 00        add %ax,(%bx,%si)
        0x000079ee: 00 7c 00     add %bh,0x0(%si)
        0x000079f1: 00 00        add %al,(%bx,%si)
        0x000079f3: 08 00        or  %al,(%bx,%si)
        0x000079f5: 00 00        add %al,(%bx,%si)
        0x000079f7: 00 00        add %al,(%bx,%si)
        0x000079f9: 00 a0 07 be  add %ah,-0x41f9(%bx,%si)

    I'm using the latest version of qemu and trying to read from the hard drive (0x80). I have also tried a 4-byte alignment for the structure with the same result (CF 1, AH 0x01). The extensions are present.

    Read the article

  • SQL Agent Logon - What is going on?

    - by James Wiseman
    I have a DTSX package that is called from a SQL Agent job. The DTSX package references a file at a fixed location (e.g. e:\mssql\myfile.txt). On most machines this location exists, but on some I have to manually map it (which is not a problem - I know a better solution would be to use package configurations to dynamically pull the file location, but this is not an option here - and anyway I'd like to understand what is going on). I have set up the Agent service to run as a specific user (e.g. myuser).

    When I log on as this user, map the directory, then run the dtsx package directly, all goes well. When I run the package through a SQL Agent job, the file cannot be found. If I add a command line job step to the Agent job to map the drive:

        net use e: \\svr\location

    then all works fine also.

    So what is going on in the background? How come the SQL Agent user requires the drive mapping even when I am logged in as this user?

    Read the article

  • Error doing an MSBuild on a CLR Stored Procedure project on the Build Server

    - by CraftyFella
    Hi,

    When building a CLR stored procedure project using MSBuild on our build server (TeamCity) we're getting the following error:

        error MSB4019: The imported project "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\SqlServer.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk

    I've checked to see if the file exists on disk and sure enough it doesn't. I've checked on my own machine and it does exist. I don't really want to start copying files over to the build server manually. Here are the lines from the csproj file which are being imported into the proj file:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
        <Import Project="$(MSBuildToolsPath)\SqlServer.targets" />

    Here's the line from the proj file which is being run by our TeamCity server:

        <Import Project="..\$(ProjectName).csproj"/>

    My question is really: where does this file come from? Is it part of the Visual Studio install, for example? Or is there some redistribution package somewhere to allow me to compile this project on our build server?

    Thanks. BTW, if I just copy the file onto the build server it does actually work.

    Dave

    Read the article

  • VBScript Catching Erroring Variable Value

    - by Soren
    I have a VBScript (.vbs file) that is just a simple directory listing of a drive. It will be the basis of a drive backup script. But when running it as it is below, I am getting a Permission Denied error on some folder. What I need to find is what that folder is, so I can figure out what the problem is with the folder. The line that is giving the error is "For Each TempFolder In MoreFolders". So what I am trying to figure out is how to WScript.Echo the current path (objDirectory) if there is an error. I am not sure if it matters much, but just in case: the error that I am getting is Permission Denied 800A0046 on line 12. So some folder, I do not know which one, is not letting me look inside.

        Set WSShell = WScript.CreateObject("WScript.Shell")
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        Dim FolderArr()
        FolderCount = 0
        TopCopyFrom = "G:\"

        Sub WorkWithSubFolders(objDirectory)
            Set MoreFolders = objDirectory.SubFolders
            'The next line is where the error occurs (line 12)
            For Each TempFolder In MoreFolders
                FolderCount = FolderCount + 1
                ReDim Preserve FolderArr(FolderCount)
                FolderArr(FolderCount) = TempFolder.Path
                ' WScript.Echo TempFolder.Path
                WorkWithSubFolders(TempFolder)
            Next
        End Sub

        ReDim Preserve FolderArr(FolderCount)
        FolderArr(FolderCount) = TopCopyFrom
        Set objDirectory = objFSO.GetFolder(TopCopyFrom)
        WorkWithSubFolders(objDirectory)
        Set objDirectory = Nothing

        WScript.Echo "FolderCount = " & FolderCount
        WScript.Sleep 30000

        Set objFSO = Nothing
        Set WSShell = Nothing

    Read the article

  • How to create resource manager in ASP.NET

    - by Bart
    Hello, I would like to create a resource manager on my page and use some data stored in my resource files (default.aspx.resx and default.aspx.en.resx). The code looks like this:

        System.Resources.ResourceManager myResourceManager =
            System.Resources.ResourceManager.CreateFileBasedResourceManager(
                "resource",
                Server.MapPath("App_LocalResources") + Path.DirectorySeparatorChar,
                null);

        if (User.Identity.IsAuthenticated)
        {
            Welcome.Text = myResourceManager.GetString("LoggedInWelcomeText");
        }
        else
        {
            Welcome.Text = myResourceManager.GetString("LoggedOutWelcomeText");
        }

    but when I compile and run it on my local server I get this type of error:

        Could not find any resources appropriate for the specified culture (or the neutral culture) on disk.
        baseName: resource  locationInfo:  fileName: resource.resources

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Resources.MissingManifestResourceException: Could not find any resources appropriate for the specified culture (or the neutral culture) on disk.
        baseName: resource  locationInfo:  fileName: resource.resources

        Source Error:
        Line 89: else
        Line 90: {
        Line 91:     Welcome.Text = myResourceManager.GetString("LoggedOutWelcomeText");
        Line 92: }
        Line 93:

    Can you please assist me with this issue?

    Read the article

  • Securely erasing a file using simple methods?

    - by Jason
    Hello, I am using C# .NET Framework 2.0. I have a question relating to file shredding. My target operating systems are Windows 7, Windows Vista, and Windows XP. Possibly Windows Server 2003 or 2008, but I'm guessing they should be the same as the first three.

    My goal is to securely erase a file. I don't believe using File.Delete is secure at all. I read somewhere that the operating system simply marks the raw hard-disk data for deletion when you delete a file - the data is not erased at all. That's why there exist so many working methods to recover supposedly "deleted" files. I also read that's why it's much more useful to overwrite the file, because then the data on disk actually has to be changed. Is this true? Is this generally what's needed? If so, I believe I can simply write the file full of 1's and 0's a few times.

    I've read:

    - http://www.codeproject.com/KB/files/NShred.aspx
    - http://blogs.computerworld.com/node/5756
    - http://blogs.computerworld.com/node/5687
    - http://stackoverflow.com/questions/4147775/securely-deleting-a-file-in-c-net
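
    To make the overwrite idea concrete, here is a minimal sketch, written in Java rather than the question's C# purely as an illustration of the overwrite-then-delete approach; the class name, pass count, and buffer size are arbitrary. Note that on journaling filesystems, SSDs with wear levelling, or copy-on-write storage the rewritten bytes may not land on the same physical blocks, so this should not be treated as a guaranteed secure erase.

        import java.io.File;
        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.security.SecureRandom;

        public final class SimpleShred {
            /** Overwrite the file's bytes in place a few times, then delete it. */
            public static void shred(File file, int passes) throws IOException {
                SecureRandom rng = new SecureRandom();
                byte[] buffer = new byte[64 * 1024];
                // "rws" asks for every update to content and metadata to be written
                // synchronously to the underlying storage device.
                try (RandomAccessFile raf = new RandomAccessFile(file, "rws")) {
                    long length = raf.length();
                    for (int pass = 0; pass < passes; pass++) {
                        raf.seek(0);
                        long remaining = length;
                        while (remaining > 0) {
                            rng.nextBytes(buffer);
                            int chunk = (int) Math.min(buffer.length, remaining);
                            raf.write(buffer, 0, chunk);
                            remaining -= chunk;
                        }
                    }
                }
                if (!file.delete()) {
                    throw new IOException("could not delete " + file);
                }
            }
        }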

    Read the article

  • Updating permissions on Amazon S3 files that were uploaded via JungleDisk

    - by Simon_Weaver
    I am starting to use Jungle Disk to upload files to an Amazon S3 bucket which corresponds to a Cloudfront distribution. i.e. I can access it via an http:// URL and I am using Amazon as a CDN. The problem I am facing is that Jungle Disk doesn't set 'read' permissions on the files so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL then that URL now returns a 200. I really really like the simplicity of dragging files to a network drive. JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However it doesn't seem to have an option to make the files read-able. I really don't want to have to go to a different tool (especially if i have to buy it) to just change the permissions - and this seems really slow anyway because they generally seem to traverse the whole directory structure. JungleDisk provides some kind of 'web access' - but this is a paid feature and I'm not sure if it will work or not. S3 doesn't appear to propagate permissions down which is a real pain. I'm considering writing a manual tool to traverse my tree and set everything to 'read' but I'd rather not do this if this is a problem someone else has already solved.

    Read the article

  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great.

    Then I put in a million similar records all with keys in the form of X.Y where X in (1..10) and Y in (1..100,000) and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec).

    Finally I put ten million records in from 1.1 up through 10.1000000 and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy.

    I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10mil records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 gigs of RAM so I don't think it's the machine.

    Here's my code to fetch records, which I'm spawning into 8 threads to ask for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1 + sRand.Next(9)) + "." + (1 + sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights

    Read the article

  • Programmatically modifying a file on Windows Server 2008 (Web Ed.)

    - by Tom
    I have written a .NET 2008 application, incorporating Microsoft.Office.Interop.Excel, that modifies an existing Excel 2007 spreadsheet. It works perfectly on my WinXP development computer. When I upload the app to a Microsoft Web Server 2008, it opens the file and reads from the file, but when the app tries to save the file, it throws this exception: "System.Runtime.InteropServices.COMException (0x800A03EC): 'july2009.xlsx' is read-only. To save a copy, click OK, then give the workbook a new name in the Save As dialog box." The file is NOT read-only, nor is it opened by any other user or app. The app and the Excel file both reside on the D: (data-only) drive. My first instinct was to look at file permissions. When nothing else worked, I literally created a temporary Group, added EVERY user and security entity to it and granted the group full control of the entire D: drive. No luck. Then I tried manually elevating the permission by running my app as administrator. No luck. Finally, I copied the file to my XP development computer and ran the app there. Of course it worked perfectly. Can anyone please tell me how to give my program permission to edit a file on Server 2008? Thanks!

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways in which to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerIdId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    - Drop the primary key while I am doing the inserting and recreate it later
    - Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small
    - Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    - Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts as opposed to dropping the constraint entirely for import?
    - Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine because it would then have to process 50 times as much input data to generate the output.
    - Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.

    ~ Andrew

    Read the article

  • Optimize Use of Ramdisk for Eclipse Development

    - by Eric J.
    We're developing Java/SpringSource applications with Eclipse on 32-bit Vista machines with 4GB RAM. The OS exposes roughly 3.3GB of RAM due to reservations for hardware etc. in the virtual address space. I came across several Ramdisk drivers that can create a virtual disk from the OS-hidden RAM and am looking for suggestions how best to use the 740MB virtual disk to speed development in our environment. The slowest part of development for us is compiling as well as launching SpringSource dm Server. One option is to configure Vista to swap to the Ramdisk. That works, and noticeably speeds up development in low memory situations. However, the 3.3GB available to the OS is often sufficient and there are many situations where we do not use the swap file(s) much. Another option is to use the Ramdisk as a location for temporary files. Using the Vista mklink command, I created a hard link from where the SpringSource dm Server's work area normally resides to the Ramdisk. That significantly improves server startup times but does nothing for compile times. There are roughly 500MB still free on the Ramdisk when the work directory is fully utilized, so room for plenty more. What other files/directories might be candidates to place on the Ramdisk? Eclipse-related files? (Parts of) the JDK? Is there a free/open source tool for Vista that will show me which files are used most frequently during a period of time to reduce the guesswork?

    Read the article

  • How do I programmatically create a bootable CD?

    - by Nicholas Flynt
    I'm using a barebones tutorial as the basis for an OS I'm working on, and it seems to be an older tutorial: it has me compiling the kernel down to a floppy image, and then loading it with GRUB. Basically, I still want to use GRUB, but I'd like to have my OS run from a CD instead. The main reason is that I don't actually have a real floppy drive available (I'm testing in VirtualBox currently) and I thus have no way to test my OS on real hardware. I've been poking around on the net, and I can find lots of utilities that create a bootable CD from a floppy image, but these all seem to require an actual floppy drive, plus it's not really what I'm looking for. I'd like to be able to end up with a bootable CD during my make step ideally, without needing to first place the image on a floppy, which seems rather pointless. I guess the easy way to answer this: how do I set up GRUB to read my kernel image from a CD? Will I need a special utility to do this from Windows? (The kernel can't compile itself yet, that's not for a looong while.) Thanks!

    Read the article

  • Creating a single CRM plugin DLL to store in the CRM database

    - by Shaamaan
    Since the suggested way of storing plugins in MS CRM is via the CRM database, I figured it's about time to do something about the method I'm currently using, which is storing the DLLs on disk. The trouble however is that I don't know how to embed all the other various bits that are needed by the DLL: the localization resource files (which are kept in another folder) and some referenced DLLs from the latest SDK (which had to be manually placed in the bin\assembly folder). At this point, I'm not even entirely sure this is possible.

    So far I've tried to solve the localization problem by changing the build action on the resource files to "Content" or "Resource" and tested this solution (still keeping the location on disk, but without the added localization folder). This didn't work: when I purposely generated a validation error in one of the plugins, I got the default language message (English) despite having a different language selected in the CRM.

    I've faced a similar problem when trying to add some of the referenced DLL files (namely the new SDK DLLs: xrm.portal, xrm.portal.files and xrm.client). When I tried to store the plugin in the database (skipping for a moment the localization issue), I got a CRM error saying it cannot find the XRM.Client assembly or one of its dependencies.

    I know I could use ILMerge to put the whole thing together, but I've got a gut feeling telling me this isn't really a good idea. Any hints or suggestions on this issue would be great.

    Read the article

  • Can a large transaction log cause cpu hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15 GB, with roughly 5 GB to the db and 10 GB to the transaction log. Just recently a web application that is connecting to that db is timing out. I have traced the actions on the web page and examined the queries that execute whilst these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU hikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say multiple... read about 5). Under this load, timeouts start to occur.

    I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation).

    Many Thanks,

    Read the article

  • Help Me With This Access Query

    - by yae
    I have 2 tables: "products" and "pieces".

    PRODUCTS: idProd, product, price

    PIECES: id, idProdMain, idProdChild, quant

    idProdMain and idProdChild are related to the table "products". Another consideration is that 1 product can have several pieces, and 1 product can itself be a piece. A product's price is the sum of quantity * price over all of its pieces. The "products" table contains all products. EXAMPLE:

    TABLE PRODUCTS (idProd - product - price)
        1 - Computer - 300€
        2 - Hard Disk - 100€
        3 - Memory - 50€
        4 - Main Board - 100€
        5 - Software - 50€
        6 - CDroms 100 un. - 30€

    TABLE PIECES (id - idProdMain - idProdChild - Quant.)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    WHAT I NEED? I need to update the price of the main product when the price of a child product (piece) changes. Following the previous example, if I change the price of the product "Memory" (which is also a piece) to 60€, then the product "Computer" must change its price to 320€. How can I do it using queries?

    I have already tried this to obtain the price of the main product, but it does not run; this query does not return any value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
            ON (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));

    MORE INFO: The table "products" contains all the products for sale in the shop. The table "pieces" is there to keep track of the compound products, i.e. to know which are the child products. An example of a compound product: computers. This product is composed of other products (motherboard, hard disk, memory, cpu, etc.).

    Read the article

  • Cucumber-rails on jruby installs gem into my apps root directory with bundler

    - by brad
    Just installed cucumber 0.7.2 and cucumber-rails 0.3.1 with jruby-1.4.0 on OSX. When I run a bundle install, it places a cucumber-rails directory in my main app with all of the gem code/dependencies. First off, this is definitely not what I want and I'm not sure why this happens for cucumber-rails only. Second, if I delete this folder and just manually install cucumber-rails, when I run script/generate feature blah I get:

        /Users/bradrobertson/.rvm/rubies/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/source_index.rb:344:in `refresh!': source index not created from disk (RuntimeError)
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:34:in `refresh!'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:29:in `initialize'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `new'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `add_frozen_gem_path'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:298:in `add_gem_load_paths'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:132:in `process'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
            from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:13
            from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:1:in `require'
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:1
            from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:3:in `require'
            from script/generate:3

    Similarly, running rake cucumber I get:

        rake aborted!
        source index not created from disk

    So something obviously doesn't work. If I add that cucumber-rails directory back in, then my rake cucumber actually runs. Can someone tell me why it would need to install the gem right in my rails app? I've never seen this before.

    Setup:
    - jruby-1.4.0
    - cucumber-0.7.2
    - cucumber-rails 0.3.1
    - bundler 0.9.23
    - webrat 0.7.1

    Read the article

  • Cocoa/AppleScript move file

    - by bogdan
    I have a list of file paths and a destination path. I need something (AppleScript, Cocoa) that will move the files from one location to another. I first tried using the following AppleScript, just to see what happens:

        set the_folder to (choose folder)
        tell application "Finder"
            move selection to the_folder
        end tell

    The problem is that it just blindly tries to move a file, nothing like the way Finder actually moves files (i.e. if a file with that name already exists, the AppleScript just throws an error, while Finder would ask you if you want to replace the file). The solution I came up with involved NSFileManager. I won't post the code because it's quite long, but basically I just check if the file already exists before trying to move it, and if it exists an NSAlert with Replace/Cancel buttons appears. I have 2 remaining problems:

    - Authorization - if you try to do something to files where you don't have access, the Finder would ask you to authorize. My code just fails...
    - Moving to external drives - when you try to move a file to a different drive, NSFileManager copies the file and then deletes the original. The problem is that NSFileManager doesn't provide anything which I could use to display a progress indicator of what's happening during the copy.

    Is there anything I could use that is able to move files without these problems? The way I see it, I'm pretty much stuck with checking if the files are writable by the current user and authorizing NSFileManager if not (from my understanding of the Authorization Services, this will be quite hard to implement). Oh and, I would also need to check if the destination is on the same drive and if not, implement something with FSCopyObjectAsync so that it shows a progress indicator... Thanks!

    Read the article

  • return an ArrayList method

    - by Bopeng Liu
    This is a driver method for two other classes, which I posted here: http://codereview.stackexchange.com/questions/33148/book-program-with-arraylist

    I need some help with the private static ArrayList<String> getAuthors(String authors) method. I am kind of a beginner, so please help me finish this driver method, or give me some directions.

    Instruction: some of the elements of the allAuthors array contain asterisks “*” between two authors' names. The getAuthors method uses this asterisk as a delimiter between names to store them separately in the returned ArrayList of Strings.

        import java.util.ArrayList;

        public class LibraryDrive {

            public static void main(String[] args) {
                String[] titles = { "The Hobbit", "Acer Dumpling", "A Christmas Carol",
                        "Marley and Me", "Building Java Programs", "Java, How to Program" };
                String[] allAuthors = { "Tolkien, J.R.", "Doofus, Robert", "Dickens, Charles",
                        "Remember, SomeoneIdont", "Reges, Stuart*Stepp, Marty",
                        "Deitel, Paul*Deitel, Harvery" };
                ArrayList<String> authors = new ArrayList<String>();
                ArrayList<Book> books = new ArrayList<Book>();

                for (int i = 0; i < titles.length; i++) {
                    authors = getAuthors(allAuthors[i]);
                    Book b = new Book(titles[i], authors);
                    books.add(b);
                    authors.remove(0);
                }

                Library lib = new Library(books);
                System.out.println(lib);
                lib.sort();
                System.out.println(lib);
            }

            private static ArrayList<String> getAuthors(String authors) {
                ArrayList books = new ArrayList<String>();
                // need help here.
                return books;
            }
        }
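
    For reference, the asterisk-delimited behaviour described in the instruction can be expressed with String.split. The following is one possible sketch (not the asker's code) of a method body that could replace the stub above; it also handles entries with no asterisk, because split then returns the whole string as a single element.

        private static ArrayList<String> getAuthors(String authors) {
            ArrayList<String> names = new ArrayList<String>();
            // "*" is a regex metacharacter, so it must be escaped for split().
            for (String name : authors.split("\\*")) {
                names.add(name);
            }
            return names;
        }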

    Read the article

  • Why would I be seeing execution timeouts when setting $_SESSION values?

    - by Kev
    I'm seeing the following errors in my PHP error logs:

        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\index.php on line 3
        PHP Fatal error: Maximum execution time of 60 seconds exceeded in D:\sites\s105504\www\search.php on line 4

    The lines in question are:

    index.php:

        01 <?php
        02 session_start();
        03 ob_start();
        04 error_reporting(E_All);
        05 $_SESSION['nav'] = "range"; // <-- Error generated here

    search.php:

        01 <?php
        02
        03 session_start();
        04 $_SESSION['nav'] = "range";
        05 $_SESSION['navselected'] = 21; // <-- Error generated here

    Would it really take as long as 60+ seconds to assign a $_SESSION[] value?

    The platform is:

    - Windows 2003 SP2 + IIS6
    - FastCGI for Windows 2003 (original RTM build)
    - PHP 5.2.6, non thread-safe build

    There aren't any issues with session data files being cleared up on the server as sessions expire. The oldest sess_XXXXXXXXXXXXXX file I'm seeing is around 2 hours old. There are no disk timeouts evident in the event logs or other such disk health issues that might suggest difficulty creating session data files. The site is also on a server that isn't under heavy load. The site is busy but not being hammered and is very responsive. It's just that we get these errors, three or four in a row, every three or four hours.

    I should also add that I'm not the original developer of this code and that it belongs to a customer whose developer has long since departed.

    Read the article
