Search Results

Search found 8914 results on 357 pages for 'disk defragmentation'.

Page 172/357 | < Previous Page | 168 169 170 171 172 173 174 175 176 177 178 179  | Next Page >

  • How to scale a PHP application (servers, mysql, memcache)

    - by Stéphane Goetz
    Hi, I'm currently creating a website for a social project in Switzerland, and before there is an overflow of users I want to prepare the application to scale. I have answered many questions by myself, but some are left. Let me explain what I want to do. First, at the beginning, the application will run on only one server (short term) with DNS, PHP, MySQL, data, and memcache. Second, I will then split them in two: DNS, MySQL, memcache / data, PHP. Third, here is the problem: I don't know exactly how to proceed from there to keep the application running well. I could do: Front: load balancer, memcache, DNS; Web 1: PHP, data; Web 2: PHP, data; MySQL. That would be the scheme, with all PHP sessions kept in the DB. BUT, how do I sync the data? Do I run rsync to keep the servers up to date? Do I put the data on a separate (network) disk to be sure? But in that case, how do I handle user uploads? And if the website gets more successful and we have to move to a bigger structure, wouldn't that create some latency on updates? Or would it be a good thing to go directly to Amazon's web services? Some info: I use CodeIgniter as the framework, and Linux as the web server (distribution not chosen yet, but it should be Debian). Thanks in advance for your answers.
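
    One low-effort answer to the upload-sync question is a periodic one-way rsync from the server that accepts uploads to its peer; a minimal sketch, assuming uploads land under /var/www/uploads and the second web server is reachable over SSH as web2 (both names are hypothetical):

        # cron entry: push new/changed uploads to the second web server every minute
        # -a preserves permissions/times, -z compresses over the wire
        * * * * * rsync -az /var/www/uploads/ web2:/var/www/uploads/

    A shared network disk (e.g. NFS) avoids the sync delay entirely, at the cost of becoming a single point of failure.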

    Read the article

  • NASM - Load code from USB Drive

    - by new123456
    Hi, would any assembly gurus know the value (register dl) that signifies the first USB drive? I'm working through a couple of NASM tutorials and would like to get a physical boot (I can get a clean one with qemu). This is the section of code that loads the "kernel" data from disk:

        loadkernel:
            mov si, LMSG    ;; 'Loading kernel',13,10,0
            call prints     ;; ex puts()
            mov dl, 0x00    ;; The disk to load from
            mov ah, 0x02    ;; Read operation
            mov al, 0x01    ;; Sectors to read
            mov ch, 0x00    ;; Track
            mov cl, 0x02    ;; Sector
            mov dh, 0x00    ;; Head
            mov bx, 0x2000  ;; Buffer segment
            mov es, bx
            mov bx, 0x0000  ;; Buffer start
            int 0x13
            jc loadkernel
            mov ax, 0x2000
            mov ds, ax
            jmp 0x2000:0x00

    If it makes any difference, I'm running a stock Dell Inspiron 15 BIOS. Apparently, the correct value for me is 0x80: the BIOS loads the hard drives and labels them starting at 0x80, according to this answer, and my particular BIOS decides to present the USB drive as the first one, so I can boot from there.
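
    Rather than hard-coding 0x00 or 0x80, a more portable trick is to reuse the drive number the BIOS itself supplies: at boot-sector entry, the BIOS leaves the boot drive in dl, so saving it and loading it back before int 0x13 works however the firmware enumerates the USB stick. A minimal sketch (the BOOT_DRIVE label is hypothetical):

        start:
            mov [BOOT_DRIVE], dl    ;; BIOS hands us the boot drive in dl at entry
            ...
        loadkernel:
            mov dl, [BOOT_DRIVE]    ;; read from whatever drive we booted off
            mov ah, 0x02            ;; rest of the int 0x13 setup as before
            ...

        BOOT_DRIVE: db 0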

    Read the article

  • Detect a USB drive being inserted - Windows Service

    - by Tom Bell
    I am trying to detect a USB disk drive being inserted from within a Windows service; I have already done this as a normal Windows application. The problem is that the following code doesn't work for volumes. Registering the device notification:

        DEV_BROADCAST_DEVICEINTERFACE notificationFilter;
        HDEVNOTIFY hDeviceNotify = NULL;
        ::ZeroMemory(&notificationFilter, sizeof(notificationFilter));
        notificationFilter.dbcc_size = sizeof(DEV_BROADCAST_DEVICEINTERFACE);
        notificationFilter.dbcc_devicetype = DBT_DEVTYP_DEVICEINTERFACE;
        notificationFilter.dbcc_classguid = ::GUID_DEVINTERFACE_VOLUME;
        hDeviceNotify = ::RegisterDeviceNotification(g_serviceStatusHandle,
                                                     &notificationFilter,
                                                     DEVICE_NOTIFY_SERVICE_HANDLE);

    The code from the ServiceControlHandlerEx function:

        case SERVICE_CONTROL_DEVICEEVENT:
            PDEV_BROADCAST_HDR pBroadcastHdr = (PDEV_BROADCAST_HDR)lpEventData;
            switch (dwEventType)
            {
            case DBT_DEVICEARRIVAL:
                ::MessageBox(NULL, "A Device has been plugged in.", "Pounce", MB_OK | MB_ICONINFORMATION);
                switch (pBroadcastHdr->dbch_devicetype)
                {
                case DBT_DEVTYP_DEVICEINTERFACE:
                    PDEV_BROADCAST_DEVICEINTERFACE pDevInt = (PDEV_BROADCAST_DEVICEINTERFACE)pBroadcastHdr;
                    if (::IsEqualGUID(pDevInt->dbcc_classguid, GUID_DEVINTERFACE_VOLUME))
                    {
                        PDEV_BROADCAST_VOLUME pVol = (PDEV_BROADCAST_VOLUME)pDevInt;
                        char szMsg[80];
                        char cDriveLetter = ::GetDriveLetter(pVol->dbcv_unitmask);
                        ::wsprintfA(szMsg, "USB disk drive with the drive letter '%c:' has been inserted.", cDriveLetter);
                        ::MessageBoxA(NULL, szMsg, "Pounce", MB_OK | MB_ICONINFORMATION);
                    }
                }
                return NO_ERROR;
            }

    In a Windows application I am able to get DBT_DEVTYP_VOLUME in dbch_devicetype; however, this isn't present in a Windows service implementation. Has anyone seen or heard of a solution to this problem, without the obvious one of rewriting it as a Windows application?
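
    GetDriveLetter above is the poster's own helper rather than a Win32 API; for completeness, a typical implementation maps the lowest set bit of dbcv_unitmask to a letter (bit 0 is A:, bit 1 is B:, and so on). A minimal sketch:

        // dbcv_unitmask: bit 0 = A:, bit 1 = B:, ..., bit 25 = Z:
        char GetDriveLetter(DWORD unitmask)
        {
            char letter = 'A';
            while (unitmask && !(unitmask & 1))
            {
                unitmask >>= 1;  // skip unset bits
                ++letter;
            }
            return unitmask ? letter : '\0';  // '\0' if the mask was empty
        }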

    Read the article

  • object / class methods serialized as well?

    - by Mat90
    I know that data members are saved to disk, but I was wondering whether an object's/class's methods are saved in binary format as well, because I found some contradictory information. For example, Ivor Horton writes: "Class objects contain function members as well as data members, and all the members, both data and functions, have access specifiers; therefore, to record objects in an external file, the information written to the file must contain complete specifications of all the class structures involved." And then there is this question: Are methods also serialized along with the data members in .NET? Thus: are a method's assembly instructions (opcodes and operands) stored to disk as well, just like in a precompiled LIB or DLL? During the DOS ages I used assembly now and then. As far as I remember from Delphi and from the site above (answer by dan04), sizeof(<OBJECT or CLASS>) gives the size of all data members together (no methods/procedures). A nice C example is also given there, with data and members declared in one class/struct, while at runtime the methods are separate procedures acting on a struct of data. However, I think that later class/object implementations, like Pascal's VMT, may be different in memory.
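
    A quick way to convince yourself of what .NET actually writes: serialize an instance and inspect the stream; it contains type metadata (names) and field values, while the IL for methods stays in the compiled assembly. A minimal sketch using BinaryFormatter (fine for this experiment, though obsolete in modern .NET):

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        [Serializable]
        class Point
        {
            public int X;
            public int Y;
            public int Sum() { return X + Y; }  // code: lives in the assembly, not in the stream
        }

        class Program
        {
            static void Main()
            {
                var ms = new MemoryStream();
                new BinaryFormatter().Serialize(ms, new Point { X = 1, Y = 2 });
                Console.WriteLine(ms.Length);  // small: header plus type/field names plus two ints
            }
        }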

    Read the article

  • C# Step by Step Execution

    - by Sheldon
    Hi. I'm building an app that uses a scanner API and an image-to-other-format converter. I have a method (actually a click event) that does this:

        private void ButtonScanParse_Click(object sender, EventArgs e)
        {
            short scan_result = scanner_api.Scan();
            if (scan_result == 1)
                parse_api.Parse(); // Checks for a saved image the scanner_api stores on disk, then converts it.
        }

    The problem is that the if condition (scan_result == 1) is evaluated immediately, so it just doesn't work. How can I force the CLR to wait until the API returns the convenient result? Note that just by doing something like this:

        private void ButtonScanParse_Click(object sender, EventArgs e)
        {
            short scan_result = scanner_api.Scan();
            MessageBox.Show("Result = " + scan_result);
            if (scan_result == 1)
                parse_api.Parse(); // Checks for a saved image the scanner_api stores on disk, then converts it.
        }

    it works and displays the results. Is there a way to do this, and how? Thank you very much!
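
    The MessageBox "fix" is a clue: a modal dialog pumps window messages and takes time, which suggests the scanner finishes its work asynchronously after Scan() returns. Rather than blocking the UI thread, one option is to move the call onto a worker thread and resume when it completes; a minimal sketch (requires .NET 4.5 / C# 5, and assumes scanner_api.Scan() only returns a meaningful result once the scan is done; if it returns immediately, the API likely exposes a completion callback or event to wait on instead):

        // using System.Threading.Tasks;
        private async void ButtonScanParse_Click(object sender, EventArgs e)
        {
            short scanResult = await Task.Run(() => scanner_api.Scan());
            if (scanResult == 1)
                parse_api.Parse(); // convert the image the scanner saved to disk
        }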

    Read the article

  • RewriteRule to store thousands of files in subdirectories

    - by Brandon
    I have a website that will have millions of pages in a directory. I'd like to store those files on disk in a set of subdirectories based on the first characters of the page name. For example, http://mysite.com/hugedir/somefile.html would be stored in /var/www/html/hugedir/s/o/m/e/f/ile.html. That is fairly trivial to do with RewriteRules like so:

        RewriteRule ^hugedir/(.)(.)(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}/{$5}/$6.html
        RewriteRule ^hugedir/(.)(.)(.)(.)(.*).html    /hugedir/{$1}/{$2}/{$3}/{$4}/{$5}.html
        RewriteRule ^hugedir/(.)(.)(.)(.*).html       /hugedir/{$1}/{$2}/{$3}/{$4}.html
        RewriteRule ^hugedir/(.)(.)(.*).html          /hugedir/{$1}/{$2}/{$3}.html
        RewriteRule ^hugedir/(.)(.*).html             /hugedir/{$1}/{$2}.html
        RewriteRule ^hugedir/(.*).html                /hugedir/{$1}.html

    However, the file name may contain hyphens or other non-standard characters, and I'd really like to avoid having a directory named with a strange character. Ideally, I'd like to have a list of 'approved' characters and either eliminate the unapproved characters or transform them to an underscore. Can anybody think of a way to do that, or something equivalent? Part of the requirement is that these be physical files on disk, not parsed with a scripting language.
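
    One option mod_rewrite itself offers is an external RewriteMap program that sanitizes each captured character; a minimal sketch, assuming the map script lives at /var/www/bin/sanitize.pl (a hypothetical path; note RewriteMap must be declared in server or virtual-host context, not in .htaccess):

        # httpd.conf / vhost: declare the map, then use it in the substitutions
        RewriteMap sanitize prg:/var/www/bin/sanitize.pl
        RewriteRule ^hugedir/(.)(.*)\.html$ /hugedir/${sanitize:$1}/$2.html

        # /var/www/bin/sanitize.pl - reads one lookup key per line, writes one result per line
        #!/usr/bin/perl
        $| = 1;                          # unbuffered output, required for prg: maps
        while (my $key = <STDIN>) {
            chomp $key;
            $key =~ s/[^a-zA-Z0-9]/_/g;  # anything outside the approved set becomes '_'
            print "$key\n";
        }

    The same ${sanitize:...} lookup can be applied to each captured character in the longer rules.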

    Read the article

  • Committed memory goes to physical RAM or reserves space in the paging file?

    - by Sil
    When I call VirtualAlloc with MEM_COMMIT, this "allocates physical storage in memory or in the paging file on disk for the specified reserved memory pages" (quote from the MSDN article http://msdn.microsoft.com/en-us/library/aa366887%28VS.85%29.aspx). All is fine up until now, BUT: the description of the Committed Bytes counter says that "Committed memory is the physical memory which has space reserved on the disk paging file(s)." I also read "Windows via C/C++, 5th edition", and this book says that committing memory means reserving space in the page file... The last two cases don't make sense to me. If you commit memory, doesn't that mean that you commit to physical storage (RAM), with the page file being there for swapping out currently unused pages of memory when memory gets low? The book says that when you commit memory you actually reserve space in the paging file. If this were true, that would mean that for a committed page there is space reserved in the paging file and a page frame in physical memory, so twice as much space is needed?! Isn't the page file's purpose to make the total physical memory larger than it actually is? If I have 1 GB of RAM with a 1 GB page file, that equals 2 GB of usable "physical memory" (the book also states this, but right after that it says what I described above). What am I missing? Thanks.
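
    For reference, the reserve/commit distinction in code; a minimal sketch, assuming a Win32 build:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            // reserve 1 MiB of address space: no RAM and no page-file charge yet
            void *p = VirtualAlloc(NULL, 1 << 20, MEM_RESERVE, PAGE_NOACCESS);

            // commit it: the commit charge (RAM + paging file) grows now,
            // but a physical page is only assigned on first touch
            VirtualAlloc(p, 1 << 20, MEM_COMMIT, PAGE_READWRITE);

            ((char *)p)[0] = 42;  // first touch: a zeroed physical page appears
            printf("%d\n", ((char *)p)[0]);
            VirtualFree(p, 0, MEM_RELEASE);
            return 0;
        }

    The resolution of the apparent contradiction is that committing guarantees backing store somewhere (RAM or page file) rather than allocating both at once: the counter tracks that guarantee, so the system can always materialize the page without overcommitting.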

    Read the article

  • What is the simplest way to map a folder on the file system to a URL in Tomcat?

    - by Simon
    Here's my problem... I have a small prototype app (it happens to be in Grails, hosted on AWS) and I want to add the ability for the user to upload a few (max 10) images. I want to persist these images on disk on the server machine, in a folder location outside my WAR. I realise that there is probably a super-scalable solution involving more web servers and optimised static asset serving, but for the approximately 100 users I am likely to get, it's really not worth the effort and cost. So, what is the simplest way I can have a virtual folder in my URL map to a physical folder on disk? I sort of want http://myapp.com/static to map to a folder which I can configure, e.g. /var/www/static, so I can then have in my code: <img src="/static/user1/picture.jpg"/>. I don't particularly mind whether the resulting physical folders are directly browsable. Security will eventually be an issue, but it isn't at the start. So, what are my options? I have looked at virtual hosts on the Apache site, but it feels more complicated than I need. I don't want to use the Grails static rendering plugins.
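
    Within Tomcat itself, one lightweight approach is an extra Context whose docBase points at the folder; a minimal sketch, assuming a stand-alone Tomcat and the /var/www/static path from the question (placed inside the Host element of conf/server.xml, or in its own context descriptor):

        <!-- conf/server.xml, inside <Host> -->
        <Context path="/static" docBase="/var/www/static" />

    Tomcat's default servlet then serves anything under /var/www/static at http://myapp.com/static/..., with directory listings off by default.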

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" trade-off between perceived responsiveness and performance)?
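
    For scale, here is the shape of the loop in question with the buffer size pulled out as the knob under discussion; a minimal sketch (the progress callback stands in for the event, and 80 KB is one common choice: large enough to amortize call overhead, small enough to stay under the .NET large-object-heap threshold of 85,000 bytes):

        using System;
        using System.IO;

        static class Copier
        {
            const int BufferSize = 80 * 1024;

            public static void Copy(Stream source, Stream destination, Action<long> onProgress)
            {
                var buffer = new byte[BufferSize];
                long total = 0;
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    destination.Write(buffer, 0, read);
                    total += read;
                    onProgress(total);  // UI update cadence is tied directly to BufferSize
                }
            }
        }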

    Read the article

  • How to debug JBoss out of memory problem?

    - by user561733
    Hello, I am trying to debug a JBoss out-of-memory problem. When JBoss starts up and runs for a while, it seems to use memory as intended by the startup configuration. However, it seems that when some unknown user action is taken in the sole web application JBoss is serving (or the log file grows to a certain size), memory increases dramatically and JBoss freezes. When JBoss freezes, it is difficult to kill the process or do anything because of the low memory. When the process is finally killed via a -9 signal and the server is restarted, the log file is very small and only contains output from the startup of the newly started process, with no information on why the memory increased so much. This is why it is so hard to debug: server.log has no information from the killed process. The log is set to grow to 2 GB, and the log file for the new process is only about 300 KB, though it grows properly under normal memory circumstances. This is the JBoss configuration:

        JBoss (MX MicroKernel) 4.0.3
        JDK 1.6.0 update 22
        PermSize=512m
        MaxPermSize=512m
        Xms=1024m
        Xmx=6144m

    This is basic info on the system:

        Operating system: CentOS Linux 5.5
        Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64
        Processor information: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores

    This is representative system information during normal pre-freeze conditions, a few minutes after the jboss service startup:

        Running processes: 183
        CPU load averages: 0.16 (1 min) 0.06 (5 mins) 0.09 (15 mins)
        CPU usage: 0% user, 0% kernel, 1% IO, 99% idle
        Real memory: 17.38 GB total, 2.46 GB used
        Virtual memory: 19.59 GB total, 0 bytes used
        Local disk space: 113.37 GB total, 11.89 GB used

    When JBoss freezes, system information looks like this:

        Running processes: 225
        CPU load averages: 4.66 (1 min) 1.84 (5 mins) 0.93 (15 mins)
        CPU usage: 0% user, 12% kernel, 73% IO, 15% idle
        Real memory: 17.38 GB total, 17.18 GB used
        Virtual memory: 19.59 GB total, 706.29 MB used
        Local disk space: 113.37 GB total, 11.89 GB used
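
    When the log from the dying process is lost, it can help to make the JVM record the failure itself: a heap dump taken at the moment of an OutOfMemoryError can be analyzed offline (with jhat or the Eclipse Memory Analyzer). A minimal sketch of the flags, added wherever JAVA_OPTS is set for this JBoss generation (bin/run.conf if present, otherwise bin/run.sh; the dump path is an example):

        # bin/run.conf
        JAVA_OPTS="$JAVA_OPTS -verbose:gc \
                   -XX:+HeapDumpOnOutOfMemoryError \
                   -XX:HeapDumpPath=/var/log/jboss/dumps"

    One caveat: with CPU time 73% in IO wait and real memory nearly exhausted, the box may be swapping rather than throwing a Java OutOfMemoryError at all, in which case the GC log will show whether the Java heap itself is actually growing.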

    Read the article

  • Help Me With This MS-Access Query

    - by yae
    I have 2 tables, "products" and "pieces":

        PRODUCTS: idProd, product, price
        PIECES:   id, idProdMain, idProdChild, quant

    idProdMain and idProdChild are related to the table "products". Another consideration is that one product can have several pieces, and one product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. The "products" table contains all products. EXAMPLE:

        TABLE PRODUCTS (idProd - product - price)
        1 - Computer - 300€
        2 - Hard Disk - 100€
        3 - Memory - 50€
        4 - Main Board - 100€
        5 - Software - 50€
        6 - CDroms 100 un. - 30€

        TABLE PIECES (id - idProdMain - idProdChild - quant)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    WHAT DO I NEED? I need to update the price of the main product when the price of a child product (piece) changes. Following the previous example, if I change the price of the product "Memory" (which is a piece too) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries? I have already tried this to obtain the price of the main product, but it doesn't run; this query returns no value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
          ON (products.idProd = pieces.idProdChild)
         AND (products.idProd = pieces.idProdChild)
         AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));

    MORE INFO: The "products" table contains all the products for sale in the shop. The "pieces" table is there to keep control of the compound products, to know which products are their children. An example of a compound product is a computer: this product is composed of other products (motherboard, hard disk, memory, CPU, etc.).
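
    For what it's worth, the query above joins products.idProd to both idProdChild and idProdMain at once, so a row can only match when a piece is its own parent, and idProdMain = 5 (Software) has no pieces in the sample data anyway. A sketch of the sum for one parent, joining only on the child id (using Computer, idProdMain = 1):

        SELECT Sum(products.price * pieces.quant) AS PiecesTotal
        FROM products INNER JOIN pieces
          ON products.idProd = pieces.idProdChild
        WHERE pieces.idProdMain = 1;

    With the sample data this yields 100*1 + 60*2 + 100*1 = 320€, and the result can then drive an UPDATE of the parent row.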

    Read the article

  • java: assigning object reference IDs for custom serialization

    - by Jason S
    For various reasons I have a custom serialization where I am dumping some fairly simple objects to a data file. There are maybe 5-10 classes, and the object graphs that result are acyclic and pretty simple (each serialized object has 1 or 2 references to another serialized object). For example:

        class Foo {
            final private long id;
            public Foo(long id, /* other stuff */) { ... }
        }

        class Bar {
            final private long id;
            final private Foo foo;
            public Bar(long id, Foo foo, /* other stuff */) { ... }
        }

        class Baz {
            final private long id;
            final private List<Bar> barList;
            public Baz(long id, List<Bar> barList, /* other stuff */) { ... }
        }

    The id field is just for the serialization: when I am serializing to a file, I write objects by keeping a record of which IDs have been serialized so far, then for each object checking whether its child objects have been serialized and writing the ones that haven't, and finally writing the object itself by writing its data fields and the IDs corresponding to its child objects. What's puzzling me is how to assign IDs. I thought about it, and it seems like there are three cases for assigning an ID:

    - dynamically-created objects: the id is assigned from a counter that increments
    - objects read from disk: the id is assigned from the number stored in the disk file
    - singleton objects: the object is created prior to any dynamically-created object, to represent a singleton that is always present

    How can I handle these properly? I feel like I'm reinventing the wheel, and there must be a well-established technique for handling all the cases.
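
    A common pattern for the write side is to let the serializer own the numbering instead of the objects: an IdentityHashMap hands out IDs on first sight, which collapses the three-way split, since counters, file-loaded IDs, and singletons all go through the same lookup. A minimal sketch with hypothetical names:

        import java.util.IdentityHashMap;
        import java.util.Map;

        class IdRegistry {
            private final Map<Object, Long> ids = new IdentityHashMap<>();
            private long next = 1;

            /** Returns the object's ID, assigning a fresh one on first sight. */
            long idFor(Object o) {
                return ids.computeIfAbsent(o, k -> next++);
            }

            /** Pre-register an object read from disk (or a singleton) under a known ID. */
            void register(Object o, long id) {
                ids.put(o, id);
                if (id >= next) next = id + 1;  // keep the counter ahead of loaded IDs
            }
        }

    This also removes the id fields from the domain classes themselves, since identity only matters while a file is being written or read.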

    Read the article

  • Fast serialization/deserialization of structs

    - by user256890
    I have a huge amount of geographic data represented in a simple object structure consisting only of structs. All of my fields are of value type.

        public struct Child
        {
            readonly float X;
            readonly float Y;
            readonly int myField;
        }

        public struct Parent
        {
            readonly int id;
            readonly int field1;
            readonly int field2;
            readonly Child[] children;
        }

    The data is chunked up nicely into small portions of Parent[]s. Each array contains a few thousand Parent instances. I have way too much data to keep it all in memory, so I need to swap these chunks to disk back and forth. (One file would be approx. 200-300 KB.) What would be the most efficient way of serializing/deserializing a Parent[] to a byte[] for dumping to disk and reading back? Concerning speed, I am particularly interested in fast deserialization; write speed is not that critical. Would a simple BinarySerializer be good enough, or should I hack around with StructLayout (see the accepted answer)? I am not sure if that would work with the array field Parent.children. UPDATE: In response to the comments - yes, the objects are immutable (code updated), and indeed the children field is not a value type. 300 KB doesn't sound like much, but I have zillions of files like that, so speed does matter.
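
    Given the Child[] field, whole-struct blitting is out, but a hand-rolled BinaryWriter/BinaryReader pass is usually both simple and fast for fixed shapes like this, since it writes no type metadata and uses no reflection. A minimal sketch of the read side (the hot path here), assuming constructors exist that set the readonly fields (hypothetical, as the posted structs have none):

        static Parent[] ReadChunk(BinaryReader r)
        {
            var parents = new Parent[r.ReadInt32()];          // element count written first
            for (int i = 0; i < parents.Length; i++)
            {
                int id = r.ReadInt32(), f1 = r.ReadInt32(), f2 = r.ReadInt32();
                var children = new Child[r.ReadInt32()];
                for (int c = 0; c < children.Length; c++)
                    children[c] = new Child(r.ReadSingle(), r.ReadSingle(), r.ReadInt32());
                parents[i] = new Parent(id, f1, f2, children);
            }
            return parents;
        }

    The writer mirrors this field by field with a BinaryWriter, in the same order.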

    Read the article

  • How to force client browser to download images from server rather than using its cache

    - by anonim.developer
    Assume a simple aspx data-entry page on which an admin user can upload an image as well as some other data. They are stored in a database, and the next time the admin visits that page to edit the record, the image data is fetched, a preview is generated and saved to disk (using GDI+), and the preview is shown in an image control. This procedure works fine the first time; however, if the image changes (a new one is uploaded), the next time the page is visited it shows the previously uploaded image. I debugged the application and everything works correctly: the new image data is in the database and the new preview is stored in the temp location, yet the page shows the previous one. If I refresh the page, it shows the new image preview. I should mention that the preview is always saved to disk under one name (the id of each record as the name). I think that is because IE and other browsers use the client cache instead of loading images each time a page is visited. I wonder if there is a way to force the client browser to refresh itself so the newly uploaded image is shown without user intervention. Thanks and appreciation in advance.
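
    The usual low-tech fix is a cache-busting query string: keep the file name stable but vary the URL whenever the image changes, so the browser treats it as a new resource. A minimal sketch in the code-behind, assuming an Image control named imgPreview and a record with an id and a last-modified timestamp (all of these names are hypothetical):

        // browsers cache per-URL, so a new version token forces a re-fetch
        imgPreview.ImageUrl = string.Format("~/Temp/{0}.jpg?v={1}",
                                            recordId,
                                            lastModifiedUtc.Ticks);

    Alternatively, the handler serving the preview can send a Cache-Control: no-cache header, but the query-string approach preserves normal caching for images that haven't changed.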

    Read the article

  • How to write a CGI script in Perl that accepts output (an image file) from a URLConnection in a Java applet and writes it to the server?

    - by Brad Rock
    I created a Java applet for my web page that creates an image file. I want users to be able to run the applet to create unique images, click a button, and have the image they created saved to the web server. I think I have the code down for writing the image to a URLConnection in the Java applet itself: I first successfully wrote a file while running the applet on my own system rather than on the web page, saving it to the local disk, and then altered a few things to write to a URLConnection instead (although we shall see how that works too). I am now trying to write a CGI script in Perl that takes the image output from the URLConnection and writes the image to a file on my web server (note: I am a newbie to Perl). I have found many examples of how to do something similar with simple text coming from an applet and then writing it to a text file, but I want to know how to apply the same concept to an image. In particular, how do I read in the image? I've seen text input get read by using read(STDIN, $some_variable) - does the same thing work with an image? Likewise, how do I write the image file? What function do I use? Thanks for your help. I know that I am rather naive about all of this.
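
    Binary data goes through the same calls as text; the important parts are passing a length to read() and switching both handles to binary mode so nothing mangles the bytes. A minimal sketch, assuming the applet POSTs the raw image bytes as the request body (the output path is an example):

        #!/usr/bin/perl
        use strict;
        use warnings;

        binmode STDIN;                          # raw bytes in
        my $len = $ENV{CONTENT_LENGTH} || 0;    # POST body size, set by the web server
        read(STDIN, my $image, $len);           # read($fh, $scalar, $length)

        open my $out, '>', '/var/www/uploads/image.png' or die "open: $!";
        binmode $out;                           # raw bytes out
        print $out $image;
        close $out;

        print "Content-type: text/plain\n\nOK\n";   # minimal CGI response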

    Read the article

  • CodePlex Daily Summary for Tuesday, March 20, 2012

    Popular Releases:

    - Nearforums - ASP.NET MVC forum engine: Nearforums v8.0 - Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: internationalization, custom authentication provider, access control list for forums and threads. Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01
    - BIDS Helper: BIDS Helper 1.6 - This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new: Analysis Services Tabular Smart Diff, Tabular Actions Editor, Tabular HideMemberIf, Tabular Pre-Build...
    - JavaScript Web Resource Manager for Microsoft Dynamics CRM 2011: JavaScript Web Resource Manager (1.2.1420.191) - BUGS FIXED: when loading scripts from disk, the import of the web resource didn't do anything; when scripts were saved to disk, it wasn't possible to edit them with an editor
    - SQL Monitor - managing sql server performance: SQLMon 4.2 alpha 12 - 1. Improved process visualizer: now shows how many deadlocks there are and what the locked objects are. 2. Fixed some other problems.
    - Json.NET: Json.NET 4.5 Release 1 - New features: Windows 8 Metro build; JsonTextReader automatically reads ISO strings as dates; added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default; added DateTimeZoneHandling to control reading and writing DateTime time zone details; added async serialize/deserialize methods to JsonConvert; added Path to JsonReader/JsonWriter/ErrorContext and exceptions...
    - SCCM Client Actions Tool: SCCM Client Actions Tool v1.11 - Changes since the last version: fixed a bug where ping and cmd.exe kept running in an endless loop after action progress finished; fixed update checking from the CodePlex RSS feed. The tool is downloadable as a ZIP file that contains four files, including ClientActionsTool.hta (the tool itself) and Cmdkey.exe (a command-line tool for managing cached credentials, needed for the alternate-credentials feature when running the HTA)...
    - WebSocket4Net: WebSocket4Net 0.5 - Changes in this release: fixed the wss default-port bug; improved JsonWebSocket; supported setting the client access policy protocol for Silverlight; fixed a handshake issue in Silverlight; fixed a bug where the "Host" field in the handshake didn't contain the port when the port is not the default; supported passing in an Origin parameter for handshaking; supported reacting to pings from the server side; fixed a bug in data sending; fixed a bug where sending a closing handshake with no message would cause an excepti...
    - SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5 - Changes included in this release: supported closing-handshake queue checking; improved JSON subprotocol; supported sending ping from server to client; fixed a bug about sending a closing handshake with no message; refactored the code to improve protocol compatibility; fixed a bug about sub-protocol configuration loading in Mono; improved BasicSubProtocol; added JsonWebSocketSession
    - Daun Management Studio: Daun Management Studio 0.1 (Alpha Version) - These are the alpha application packages for Daun Management Studio to manage MongoDB Server. Please visit our official website http://www.daun-project.com
    - Survey™ - web survey & form engine: Survey™ 2.0 - The new stable Survey™ Project 2.0.0.1 version contains many new features. Technical changes: use of jQuery, ASTreeview, tabs, tooltips, and a new menu provider. Features and bugfixes: survey list and search function; folder structure for surveys; new menu structure; library list; new library fields; user list and search functions; layout options for a survey with CSS, page header and footer; new IP filter security feature; enhanced token management; new question fields such as ID, Alias...
    - RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.28 - Changes: NEW: added support for "PixHub.eu" links
    - SmartNet: V1.0.0.0 - DY SmartNet V1.0
    - callisto: callisto 2.0.21 - Added an option to disable local host detection.
    - Javascript .NET: Javascript .NET v0.6 - Upgraded to the latest stable branch of v8 (/tags/3.9.18) and switched to using their scons build system; we no longer include v8 source code as part of this project's source code. Simultaneous multithreaded use of v8 is now supported (v8 Isolates), although different contexts may not share objects or call each other. A 64-bit .Net 4.0 DLL is now included. (The download now includes x86 and x64 for both .Net 3.5 and .Net 4.0.)
    - MyRouter (Virtual WiFi Router): MyRouter 1.0.6 - This release should be more stable; there were a few bug fixes, including the x64 issue as well as an error popping up when MyRouter started, which was caused by a NULL value.
    - Google Books Downloader for Windows: Google Books Downloader 2.0.0.0 - Google Books Downloader
    - Finestra Virtual Desktops: 2.5.4501 - This is a very minor update release; please see the information about the 2.5 and 2.5.4500 releases for more information on recent changes. This update did not even have an automatic update triggered for it. Adds error checking and reporting to all threads, not only those with message loops.
    - AcDown Anime & Comic Downloader: AcDown v3.9.2 - Downloads video and comics from Acfun, Bilibili, YouTube, and other sites; written in C# and requires .NET Framework 2.0; runs on 32- and 64-bit Windows XP/Vista/7/8 (on Windows XP, install .NET Framework 2.0 (x86) first).
    - ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Release Candidate - Your feedback is welcome, and this is your last chance to get your fixes in for this version! Includes an installer for both the Feature Server extension and the Desktop extension, enhanced functionality for the Desktop tools, and an enhanced built-in JavaScript editor for the Feature Server component. This release candidate includes fixes to beta 4 that accommodate domain users for setting up the Server component, and fixes for reporting/uploading references tracked in the revision table. See Code In-P...
    - C.B.R. : Comic Book Reader: CBR 0.6 - 20 issue trackers are closed, and a lot of bugs too. The Localize view is now MVVM and delete is working. Added the unused flag (take care that it goes to true only when displaying screen elements). Backstage: new input/output format choice control for the conversion. Backstage: added display, behaviour, and register-file-type options in the extended options dialog. The Explorer list view has been transformed into a custom control; new group headers and column order and size are saved. Single insta...

    New Projects:

    - {3S} SQL Smart Security "Protect your T-SQL know-how!": {3S} SQL Smart Security is an add-in which can be installed in Microsoft SQL Server Management Studio (SSMS). It enables software companies to create secured content for database objects. The add-in brings a much higher level of protection in comparison with SQL Server's built-in WITH ENCRYPTION feature.
    - BETA - Content Slider for SharePoint 2010 / Office 365: SharePoint Banner / SharePoint 2010 Sliding Banner / Content Slider tools in Office 365. Sliding Content in SharePoint is a general tool which can be used for sliding banners or any other sliding content to be placed on any Office 365 / SharePoint 2010 / SharePoint Foundation site.
    - BF3 Development Server: The main aim of this project is to deliver a test server to all developers working on RCon (Remote Administration Console) tools for Battlefield 3. Currently the only way to test such work is to hire a real game server.
    - BizTalk Server 2010 TCP/IP Adapter: This project is a migration of the existing BizTalk Server 2009 TCP/IP adapter to BizTalk Server 2010. I have made a few configuration changes which make this adapter and its installation compatible with BizTalk Server 2010. I have not modified the adapter source code.
    - BryhtCommon: Makes it easy to develop for WP7; it contains some useful methods.
    - Bug.Net Defect Tracking Components: Bug.Net server-side controls and components to add defect (bug) tracking to your current ASP.NET website.
    - Customize Survey With Client Object Model: Customize the OOB survey/vote/poll in SharePoint with the client object model. Visit http://swatipoint.blogspot.in/2011/12/sharepoint-client-object-modellist.html for more details.
    - DropboxToDo: A simple todo, synchronizing by Dropbox or, in future, by SkyDrive.
    - Foggy: Foggy is a WPF dashboard for FogBugz information.
    - Fort Myers High School Website: A website for Fort Myers High School in Fort Myers, Florida. This website will allow both students and parents to better interact with the school. Developed in ASP.NET (C#).
    - Gonte.DataAccess: Data access for .NET. It's developed in C#.
    - Gonte.ObjectModel: Metadata about objects. It's developed in C#.
    - Gonte.SqlGenerator: SQL generator. It's developed in C#.
    - kLib: This project space is for data structures and classes which should always be available. Any developer should use these in their projects.
    - Liuyi.Phone.CharmScreen: Liuyi.Phone.CharmScreen, a Liuyi Windows Phone app.
    - LoU: Lord of Ultima helper suite.
    - Managing Supplies: This WP7 project is able to manage your own supplies.
    - NAV Fixed Assets 2012: Changes related to 'Dossier Fiscal' in Microsoft Dynamics NAV 2009 - new model 30, plus changes to model 31 and model 32.
    - NCAA Tourney DotNetNuke Module: The NCAA Tourney is a DotNetNuke 3.X - 6.X module that allows you to add an NCAA tournament to your portal. You can allow users to record their picks for the tournament and then manage the outcome of the tournament, calculating the winner of your tourney based on a customizable point system. The module has been designed to be very user friendly and efficient for the end user as well as the administrator of the tournament.
    - nothing here anymore: nothing here anymore
    - Orchard Dream Store Project: A simple website using Orchard CMS, for a school project.
    - PowerShell Management Library for TEM: A project to provide PowerShell functionality for managing your Tivoli Endpoint Manager (built upon BigFix technology). You can locally or remotely manage endpoints and relays via this simple and easy-to-use PowerShell module.
    - PrismWebBuilder: Web Builder project.
    - Projeto Northwind: Northwind - FPU
    - QLCF: QLCF
    - Shipwire API: Shipwire makes it easier for consumers of Shipwire's international shipment fulfilment service to integrate their XML API quickly and easily. Current features: Inventory Service, Rate Service (shipping costs). Future features: Order Entry Service, Order Tracking Service.
    - Smith XNA tools: Smith's XNA tools is a set of useful tools that I made to add some basic features to XNA and make game design easier.
    - ST Recover: ST Recover can read Atari ST floppy disks on a PC under Windows, including special formats such as 800 or 900 KB and damaged or desynchronized disks, and produces standard .ST disk image files. The image files can then be read in ST emulators such as WinSTon or Steem.
    - SyncSMS: Windows desktop client for the Android app SyncSMS. This code is not affiliated with SyncSMS in any way.
    - tempzz: tempzz
    - Truxtor: Truxtor modular concept for electronic gadgets.
    - TT SA TEST1: TT SA Test 1
    - VisualQuantCode: Neuroquants is a library in C# for quants. It's developed in C#.
    - Weeps: Generates bass and drum lines, as MIDI, to serve as a backing track for what the user plays on guitar.
    - xxtest: xxtest

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into that warehouse. The data I was importing from the business database was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the inserts, updates, and deletes in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse - the MERGE statement - that took the rows from the working table and updated the real fact table.

        USE Warehouse

        CREATE TABLE Integration.MergePolicy (
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date,
            Operation varchar(5)
        )

        CREATE TABLE fact.Policy (
            PolicyKey int identity primary key,
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date
        )

        CREATE PROC Integration.MergePolicy as
        begin
            begin tran
            Merge fact.Policy as tgt
            Using Integration.MergePolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey,
                           Premium = src.Premium,
                           Deductible = src.Deductible,
                           EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;
            delete from Integration.MergePolicy
            commit
        end

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement and 45% on the DELETE statement, with table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans. (I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATS_IO and ran the sproc again. The output was quite interesting:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages for tables stored as heaps, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table might have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
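
    Turning the fix described above into DDL is a two-liner; a minimal sketch (the index name is hypothetical, and any narrow column works as the key):

        -- empty the heap first (quickly deallocates its pages), then add the clustered index
        TRUNCATE TABLE Integration.MergePolicy;
        CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId ON Integration.MergePolicy (PolicyId);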

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: rebuild indexes in SQL Server. This can be done one at a time, or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. That means that as the index grows, the newly added rows are physically stored in other sections of the allocated database storage space. It's kind of like when you load your Christmas shopping into the trunk of your car: when it's full, you continue to load some onto the back seat. In the same way, some storage buffer is created for your index, but once that runs out, the data is stored in other storage space, and the data in your index is no longer stored in contiguous physical pages. To access the index, the database manager has to "string together" disparate fragments to create the full index as one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect? Depending, of course, on how large the table is and how fragmented the data is, it can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, consider that when you reorganize a table's clustered index, all other non-clustered indexes on that same table will automatically be rebuilt. A table can only have one clustered index.

    How to rebuild all the indexes for one table: the DBCC DBREINDEX command will not automatically rebuild all of the indexes on a given table in a database.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]    -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)   -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database. Each index is filled with a fill factor of 90%. While the DBCC DBREINDEX command runs and rebuilds the indexes, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours, as it creates a shared lock on the tables, although it does allow read-only uncommitted data reads (i.e. SELECT).

    What is the fill factor? It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor from when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the ID of both the table and the index. To make it a lot easier, requiring only the table name and/or index name, you can run this script:

        DECLARE @ID int,
                @IndexID int,
                @IndexName varchar(128)

        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table

        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName

        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density; ideally it should be 100%. As time goes by, it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing. Here are the results for the same table and clustered index after running the rebuild script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%: no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.
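
    For reference, on SQL Server 2005 and later both DBCC DBREINDEX and DBCC SHOWCONTIG are deprecated in favour of ALTER INDEX and the sys.dm_db_index_physical_stats DMV; a minimal sketch of the equivalents for one table, reusing the table and index from the example above:

        -- rebuild everything on the table with a 90% fill factor
        ALTER INDEX ALL ON dbo.Tickets REBUILD WITH (FILLFACTOR = 90);

        -- fragmentation per index, replacing DBCC SHOWCONTIG
        SELECT index_id, avg_fragmentation_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Tickets'), NULL, NULL, 'LIMITED');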

    Read the article

  • EFI VMware Virtual SCSI Hard Drive (0.0) ... unsuccessful

    - by Ravichandra
    I installed VMware-workstation-full-8.0.0-471780 and created a new virtual machine for Mac OS X 10.6.6 with 4 GB RAM, 1 processor, a 40 GB hard disk (SCSI), and, under General, Guest operating system: Apple Mac OS X, Version: Mac OS Server 10.6. When I power on the VM, VMware gives the following unsuccessful messages:

        EFI VMware Virtual SCSI Hard Drive (0.0) ... unsuccessful
        EFI VMware Virtual IDE CDROM Drive (IDE 1.0) ... unsuccessful

    Could you please help me solve these errors?

    Read the article

  • VBoxManage: The given path [UUID] is not fully qualified

    - by Tgr
    $ vboxmanage clonehd foo.vmdk bar.vmdk
    0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
    Clone hard disk created in format 'VMDK'. UUID: f9dffd47-4907-44a5-b3f9-40eba3953d24
    $ vboxmanage showhdinfo "f9dffd47-4907-44a5-b3f9-40eba3953d24"
    VBoxManage: error: The given path 'f9dffd47-4907-44a5-b3f9-40eba3953d24' is not fully qualified

    I can just use the path name instead, but it is annoying. What do I need to do to make the hd commands work with a UUID?

    Read the article

  • How to virtualize a QNAP NAS with VirtualBox?

    - by l1zard
    I want to deploy an ownCloud-based cloud service on a QNAP NAS, and I need to test it before we can use it on the production system. There is a French site which makes it possible to download a VirtualBox disk image, but the documentation about how to use this image is very poor, and it's in French, so I can't read it. Has anybody ever successfully virtualized a QNAP NAS, and could you tell me exactly how this can be accomplished?

    Read the article

  • XAMPPErrorDomain error 1 for changing MySQL data dirs

    - by Petruza
    Hi, I tried to move my MySQL data dir from one disk to another. I modified the datadir parameter in my.cnf, but when XAMPP - and, more precisely, MySQL - starts, this error is displayed and MySQL doesn't run:

        Operation could not be completed. (XAMPPErrorDomain error 1.)

    What's the problem here? I specified the new path correctly.
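
    For what it's worth, the most common cause after a datadir move is that the MySQL process can no longer read the new location; a minimal check, assuming a Unix-like XAMPP install and /new/disk/mysql as the new path (the path and the mysql user name are both assumptions; some installs run MySQL as _mysql or nobody):

        # my.cnf
        [mysqld]
        datadir = /new/disk/mysql

        # give the MySQL user ownership of the moved directory
        sudo chown -R mysql:mysql /new/disk/mysql

    XAMPP's own error wrapper (XAMPPErrorDomain) hides mysqld's message, so the real reason is usually in the MySQL error log inside XAMPP's var/mysql directory.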

    Read the article

  • Windows Server 2008 R2 + VMware Workstation 7.0

    - by G0RY
    Hi, I'm trying to install Windows Server 2008 R2 on VMware, and I get this error:

        A required CD/DVD drive device driver is missing. If you have a driver floppy disk, CD, DVD, or USB flash drive, please insert it now.

    I tried to install it from an ISO, via Daemon Tools, and from the DVD drive, but I always get this error.

    Read the article
