Search Results

Search found 9599 results on 384 pages for 'delete overload'.

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging into my control panel after quite some time, noticed an abnormal spike (~50MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (as a regex, that would be core\.\d+). I downloaded one and checked the contents. There was a lot of gibberish (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text which says:

        This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values.

    Pretty self-explanatory. A few blocks above that text are some more readable messages that look like logs but are sandwiched between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns: Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much---only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha, but every now and then some get through. My vanity email has a spam filter, but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I've only got 150MB, so you see why I'm fretting over a 50MB spike.

    Some final details: my only server-side scripts are in PHP. The directory that accumulated the most of these core files is the one containing the WordPress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss on my websites or in my mail. They were indeed responsible for the ~50MB spike, as my disk space usage is back to expected.

    Read the article

  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation is that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged with a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment presents a management nightmare to all commercial users responsible for complying with the regulations that control the conduct of business: such as tracking, retaining, and recording company documents.

    SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: it has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. PST files in previous versions of Outlook actually blew up without warning when they reached the 2GB limit, becoming corrupted and inaccessible and leading to a thriving industry of third-party tools to clear up the mess.

    Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails into mobile devices, or onto USB keys for off-line working. The result is that the System Administrator is faced with a complex hybrid system where backups have to be taken from servers and PCs scattered around the network, where duplication of emails causes storage issues, and where document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell.

    If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you. Cheers, Michael

    Read the article

  • Does it make the game more fun when the user is forced to progress through the levels sequentially rather than letting them pick and play?

    - by BeachRunnerJoe
    Hello. For the first time in my game's development, I'm stuck with a real design dilemma. I guess that's a good thing ;) I'm building a word puzzle game that has five levels, each with 30 puzzles. Currently, the user has to solve one puzzle at a time before moving to the next. However, I'm finding the user occasionally gets stuck on a puzzle, at which point they can no longer play until they solve it. This is obviously bad, because many people will probably just quit playing the game and delete the app.

    The only elegant solution I can find to helping the player get unstuck is changing the design of the game to allow the user to pick any puzzle to play at any time. This way, if they get stuck, they can come back to it later, and at least they have other puzzles to play in the meantime. It's my opinion, however, that this new flow design doesn't make the game as fun as the original flow design, where the player has to complete a puzzle before moving to the next. To me, it's like anything else: when you only have one of something, it's more enjoyable, but when you have 30 of something, it's far less enjoyable. In fact, when I present the user with 30 puzzles to choose from, I'm concerned I might be making them feel like it's a lot of work they have to do, and that's bad. I even had a tester voluntarily tell me that being forced to complete a puzzle before moving to the next is actually motivating.

    My questions are... Do you agree/disagree? Do you have any suggestions for how I can help the player get unstuck? Thanks so much in advance for your thoughts!

    EDIT: I should mention that I've already considered a few other solutions to helping the user get unstuck, but none of them seem like good ideas. They are:

    - Add more hints: currently, the user gets two hints per puzzle. If I increase the hint count, it only makes the game easier and still leaves the possibility of the user getting stuck.
    - Add a "Show Solution" button: this seems like a bad idea because, in my opinion, it takes the fun out of the game for many people who would probably otherwise solve the puzzle if they didn't have the quick option to see the solution.

    Read the article

  • Taking a Project's Development to the Next Level

    - by user1745022
    I have been looking for advice for a while on how to handle a project I am working on, but to no avail. I am pretty much on my fourth iteration of improving an "application" I am working on; the first two times were in Excel, the third time in Access, and now in Visual Studio. The field is manufacturing. The basic idea is that I am taking read-only data from a massive Sybase server, filtering it and creating much smaller tables in Access daily (using delete and append queries), and then doing a bunch of stuff. More specifically, I use a series of queries to either combine data from multiple tables or group data in specific ways (aggregate functions), and then I place this data into a table (so I can sort and manipulate data using DAO.Recordset and run multiple custom algorithms). This process is then repeated multiple times throughout the database until a set of relevant tables is created. Many times I will create a field in a query with a value such as 1.1 so that when I append it to a table I can store information in the field from the algorithms. So as the process continues, the number of fields in the tables changes.

    The overall application consists of 4 "back-end" databases linked together on a shared drive, with various output (either front-end Access applications or Excel). So my question is: is this essentially how many data-driven applications that solve problems work? Each back-end database is updated with fresh data daily, and updating takes around 10 seconds (for three of them) and 2 minutes (for one).

    Project objectives: I want/am moving to SQL Server soon. The front end will be a web application (I know basic web development and like the administration flexibility), and Visual Studio will be the IDE with C#/.NET. Should these algorithms be run "inside the database," or as a series of C# functions on each server request? I know you're not supposed to store data in a database unless it is an actual data point, and in Access I have many columns that just hold calculations from algorithms in VBA. The truth is, I have seen multiple professional Access applications and have never seen one that has the complexity of, or does even close to what, mine does (for better or worse). But I know some professional software applications are 1000 times better than mine. So please give me a suggestion of some sort. I have been completely on my own and need some guidance on how to approach this project the right way.

    Read the article

  • Curating projects of deceased friends

    - by Ant
    A very good friend of mine, and an avid programmer, recently passed away. He left nearly 40 projects on BitBucket. Most of them are public, but a few of them are marked as private. I've decided to take on curation duties for the projects rather than leave his work to disappear. If you have been in the same situation, what did you do? Did you open-source everything? Continue development? Delete it all? I'm very interested to hear other people's experiences.

    There are a few reasons why some of the projects are marked as private (private projects on BitBucket are visible only to invited users and the original creator): One of them is an iOS web app that was free in the app store. I've had to remove the app from the store as I'm shutting down his web sites as a favour to his widow. However, I've already made the app public under the GPL v3 (he was a big GPL supporter). One of them contains proprietary code. It can't be open-sourced. Others are very much works in progress. I don't know if he intended to make them into hosted, paid services or if he wanted to give the code away under an open-source licence when they were finished.

    Here's a list of the private projects:

    - Some kind of living cell simulator that uses SBML along with Runge-Kutta and Euler algorithms to do... something. There's a fair amount of code here, but I don't know what it does or how far along it is. No docs.
    - An accountancy application; it seems to have a solid DB design behind it, but there's little code on top of that.
    - A website whose purpose is to suggest good restaurants. Built on Yii. Seems to have a lot of code, but I'd need to set up a WAMP stack to see how far along it is.
    - A website intended to host memorials to people who suffered from the same problem he did. Built on Joomla. I'm not sure how much of the code is just Joomla and how much is custom; again, I'd need to get Joomla running to find out.

    I'd just introduced him to Mercurial and BitBucket. All of the private projects are single commits of codebases he wasn't using version control with/was previously using SVN for. I don't have the SVN repositories, so I can't see the commit logs.

    Read the article

  • [EF + Oracle] Inserting Data (1/2)

    - by JTorrecilla
    Prologue: Following the EF series (I, II and III), in this chapter we will see how to create a DB record with EF.

    Inserting Data. As we noted in the second post: "One Entity matches a DB record, and one property matches a Table Column". To start, we need to create an object from one of the Entities:

        EMPLEADOS empleado = new EMPLEADOS();

    As I mentioned previously, you can also use the static factory function that VS defines for each Entity. Once we have created the object, we can access its properties and fill them in as with any ordinary class:

        empleado.NOMBRE = "Javier Torrecilla";

    After filling in the entity's properties, the object must be added to the appropriate ObjectSet in the ObjectContext:

        enti.EMPLEADOS.AddObject(empleado);

    or

        enti.AddToEMPLEADOS(empleado);

    Both methods do the same thing: create an insert statement. Have we finished? No. Every Entity has a property called "EntityState". This property is an EntityState enum, which has the following values:

    - Detached: the Entity is created, but not added to the Context.
    - Unchanged: there are no pending changes in the Entity.
    - Added: the Entity is added to the ObjectSet, but not yet sent to the DB.
    - Deleted: the object is deleted from the ObjectSet, but not yet from the DB.
    - Modified: there are pending changes to confirm.

    Let's see the values the property takes during the creation steps:

    1. While the object is created and we are filling its properties: EntityState.Detached.
    2. After adding it to the ObjectSet: EntityState.Added. This does not mean the record is in the DB yet.
    3. Saving the data: to save the data in the DB, we call the "SaveChanges" method of the ObjectContext. After invoking it, the property will be EntityState.Unchanged.

    What does the SaveChanges method do? It synchronizes and sends all pending changes to the DB. It will add, modify or delete all Entities whose EntityState property is set to Added, Deleted or Modified. After finishing, all added or modified entities will have their state changed to "Unchanged", and deleted entities take the "Detached" state.
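
    Putting the whole flow together, here is a minimal sketch in C#. The context name ModelEntities is an assumption for illustration; the entity and its properties follow the post:

        // a sketch of the complete insert flow described above
        using (var enti = new ModelEntities())          // hypothetical ObjectContext type
        {
            var empleado = new EMPLEADOS();             // EntityState.Detached
            empleado.NOMBRE = "Javier Torrecilla";

            enti.EMPLEADOS.AddObject(empleado);         // EntityState.Added (nothing in the DB yet)

            enti.SaveChanges();                         // INSERT sent to the DB
            // empleado's state is now EntityState.Unchanged
        }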

    Read the article

  • Code testing practice

    - by Robin Castlin
    So I have now come to the conclusion, like many others, that having some way of constantly testing your code is good practice, since it means fewer people get involved (colleagues and customers alike) by simply knowing what's wrong before someone else finds out the hard way. I've heard and read a bit about unit testing and understand what it's supposed to do and all. Then there are so many different types of bugs. It can be everything from the web browser not being able to send correct values, JavaScript failing, or a global function messing up a piece of code somewhere, to a change that looked good when testing it out but fails in some special case which was hard to anticipate. By simply finding these errors I learn to rarely repeat them, but there always seem to be new bugs to find and learn from.

    I would guess the best practice would be to run every page and its functions a couple of times, witness the result, and repeat this in Firefox, Chrome and Internet Explorer (and all smartphones, apparently) to make sure it works as intended. However, this would take quite some time, considering I don't work with patches/versions and do little fixes here and there a couple of times per week. What I would prefer is some kind of page I can just load that tests as many things as possible to make sure the site works as intended. Basically, just run a lot of cURL requests with POST values and see if I get the expected results. But how would I preferably avoid increasing the auto-increment IDs of my MySQL rows when I delete these test rows? It feels silly to be on ID 1000 with maybe 50 rows in total. If I could build a new project from scratch, I would probably implement some kind of smooth way to return a "TRUE" on testing instead of the actual page, but for the moment this solution would have to be bolted onto existing projects.

    My question: What would you recommend as the best way to test my site to make sure that existing functions do their job when editing the code? Should I consider implementing a lot of edits first, then manually testing the entire code to make sure it still works? Is there any nice way of testing code without "hurting" the ID columns?

    Extra thoughts: Would it be a good idea to associate each of my files with the different parts of my site which they affect? For instance, if I edit home.php, I will, through documentation, test whether my homepage still works as intended, since it's the only part of my site the file should affect.

    Read the article

  • Accessing controls of an .aspx file in .aspx.cs without any declaration?

    I am able to access the controls of an ".aspx" file in ".aspx.cs" directly, without any declaration in ".aspx.cs" or in the designer.cs. How is this possible? This happens only if I open the website as a file-system web site.

    1. Create a new ASP.NET web site with Visual Studio 2008, so the following three files are created automatically: "Default.aspx", "Default.aspx.cs" and "Default.designer.cs".
    2. Delete "Default.designer.cs" permanently.
    3. Create a button in the Default.aspx file: <asp:Button runat="server" Text="Save Plan" ID="btnSave" />
    4. Close the solution and open the website as a file system: File -> Open Web Site -> File System -> select the web site folder and open the project.

    Now btnSave is automatically recognized in Default.aspx.cs without any declaration in Default.aspx.cs, as below:

        System.Web.UI.WebControls.Button btnSave;

    How is btnSave recognized by the .cs file without being defined anywhere as an object of System.Web.UI.WebControls.Button? Note: this happens only if you open the web site from the file system, and there is no declaration at all for btnSave.
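
    For background, a hedged sketch of what seems to be happening: "web site" projects (unlike "web application" projects) are compiled dynamically by ASP.NET, which generates the control declarations from the markup itself, so no designer file is required. The snippet below only illustrates the kind of partial class the page compiler produces behind the scenes; the class name _Default comes from the default template and is not code you write:

        // roughly what the ASP.NET compiler generates from Default.aspx
        // during dynamic compilation (illustrative sketch, not user code)
        public partial class _Default : System.Web.UI.Page
        {
            // one protected field per runat="server" control found in the markup
            protected global::System.Web.UI.WebControls.Button btnSave;
        }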

    Read the article

  • RAID superblock missing on single partition. Recovery needed!

    - by user171639
    OK, so I have a 2 TB RAID 1 setup that has three partitions: sdc1: Linux, sdc2: swap, sdc3: LVM for data. However, the LVM will no longer mount. So I thought that I would take the first drive, mount it in Linux (I've done this before), and reset the spare drive to copy the data. Normally I can mount a single drive for data recovery using:

        sudo su
        apt-get install mdadm lvm2
        mdadm --assemble --scan
        modprobe dm-mod
        vgscan
        vgchange -ay c
        mount -o ro /dev/c/c /mnt

    Unfortunately, vgscan does not recognize the data partition. It appears as though the superblock on the first drive's data partition was erased while syncing with the second. So now I cannot mount that partition, and the second drive is stuck in spare mode. Any ideas? Or a way to force-mount the data partition just to copy the data?

        knoppix@Microknoppix:~$ sudo su
        root@Microknoppix:/home/knoppix# apt-get install mdadm lvm2
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        lvm2 is already the newest version.
        mdadm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 551 not upgraded.
        root@Microknoppix:/home/knoppix# mdadm --assemble --scan
        mdadm: /dev/md/1 has been started with 1 drive (out of 2).
        mdadm: /dev/md/0 has been started with 1 drive (out of 2).
        root@Microknoppix:/home/knoppix# modprobe dm-mod
        root@Microknoppix:/home/knoppix# vgscan
        Reading all physical volumes. This may take a while...
        No volume groups found
        root@Microknoppix:/home/knoppix# cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdc1[2]
              4193268 blocks super 1.2 [2/1] [U_]
        md1 : active raid1 sdc2[2]
              524276 blocks super 1.2 [2/1] [U_]
        unused devices: <none>
        root@Microknoppix:/home/knoppix# mdadm -v --assemble --auto=yes /dev/md2 /dev/sdc3
        mdadm: looking for devices for /dev/md2
        mdadm: no recogniseable superblock on /dev/sdc3
        mdadm: /dev/sdc3 has no superblock - assembly aborted
        root@Microknoppix:/home/knoppix# dumpe2fs /dev/md0 | grep -i superblock
        dumpe2fs 1.42.4 (12-Jun-2012)
        Primary superblock at 0, Group descriptors at 1-1
        Backup superblock at 32768, Group descriptors at 32769-32769
        Backup superblock at 98304, Group descriptors at 98305-98305
        Backup superblock at 163840, Group descriptors at 163841-163841
        Backup superblock at 229376, Group descriptors at 229377-229377
        Backup superblock at 294912, Group descriptors at 294913-294913
        Backup superblock at 819200, Group descriptors at 819201-819201
        Backup superblock at 884736, Group descriptors at 884737-884737
        root@Microknoppix:/home/knoppix#

    Notes: I can read the superblock from the spare drive. I was going to try to restore the superblock from one of the backups, but I don't know how, or whether that would work. I also heard that creating a new array (mdadm --create) using the same parameters will not delete the data on the drive, but I didn't want to risk it. Recommendations?

    Read the article

  • Is it good practice for 2 related tables (using auto_increment PKs) to keep the same max auto_increment ID when table1 is modified?

    - by Tum
    This question is about good design practice in programming. Consider this example with 2 interrelated tables:

        Table1
        textID - text
        1      - love..
        2      - men...
        ...

        Table2
        rID - textID
        1   - 1
        2   - 2
        ...

    Note: in Table1, textID is an auto_increment primary key; in Table2, rID is an auto_increment primary key and textID is a foreign key. The relationship is that 1 rID will have 1 and only 1 textID, but 1 textID can have a few rIDs. So, when Table1 gets modified, Table2 should be updated accordingly.

    OK, here is a fictitious example. You build a very complicated system. When you modify 1 record in Table1, you need to keep track of the related record in Table2. To keep track, you can do it like this:

    Option 1: when you modify a record in Table1, you modify the related record in Table2. This can be quite hard in terms of programming, especially for a very, very complicated system.

    Option 2: instead of modifying the related record in Table2, you delete the old record in Table2 and insert a new one. This is easier to program. For example, suppose you are using option 2; then when you modify records 1, 2, 3, ..., 100 in Table1, Table2 will look like this:

        Table2
        rID - textID
        101 - 1
        102 - 2
        ...
        200 - 100

    This means the max of the auto_increment IDs in Table1 is still the same (100), but the max of the auto_increment IDs in Table2 has already reached 200. What if the user modifies records many times? If they do, Table2 may run out of IDs. We could use BigInt, but doesn't that make the app run slower?

    Note: if you spend time programming updates to records in Table2 whenever Table1 is modified, it will be very hard and thus error-prone. But if you just clear the old records and insert new ones into Table2, it is much easier to program, and thus your program is simpler and less error-prone. So, is it good practice for 2 related tables (using auto_increment PKs) to keep the same max auto_increment ID when Table1 is modified?

    Read the article

  • MVC2 and MVC Futures causing RedirectToAction issues

    - by Darragh
    I've been trying to get the strongly typed version of RedirectToAction from the MVC Futures project to work, but I've been getting nowhere. Below are the steps I've followed and the errors I've encountered. Any help is much appreciated.

    I created a new MVC2 app and changed the About action on the HomeController to redirect to the Index page:

        Return RedirectToAction("Index")

    However, I wanted to use the strongly typed extensions, so I downloaded the MVC Futures from CodePlex and added a reference to Microsoft.Web.Mvc to my project. I added the following import statement to the top of HomeController.vb:

        Imports Microsoft.Web.Mvc

    I commented out the above RedirectToAction and added the following line:

        Return RedirectToAction(Of HomeController)(Function(c) c.Index())

    So far, so good. However, I noticed that if I uncomment the first (non-generic) RedirectToAction, it now causes the following compile error:

        Error 1 Overload resolution failed because no accessible 'RedirectToAction' can be called with these arguments:
        Extension method 'Public Function RedirectToAction(Of TController)(action As System.Linq.Expressions.Expression(Of System.Action(Of TController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Data type(s) of the type parameter(s) cannot be inferred from these arguments. Specifying the data type(s) explicitly might correct this error.
        Extension method 'Public Function RedirectToAction(action As System.Linq.Expressions.Expression(Of System.Action(Of HomeController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Value of type 'String' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Action(Of mvc2test1.HomeController))'.

    Even though IntelliSense was showing 8 overloads (the original 6 non-generic overloads, plus the 2 new generic overloads from the Futures assembly), it seems that when compiling the code, the compiler would only 'find' the 2 extension methods from the Futures assembly. I thought this might be an issue with conflicting versions of the MVC2 assembly and the Futures assembly, so I added MvcDiagnostics.aspx from the Futures download to my project, and everything looked correct:

        ASP.NET MVC Assembly Information (System.Web.Mvc.dll)
        Assembly version: ASP.NET MVC 2 RTM (2.0.50217.0)
        Full name: System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
        Code base: file:///C:/WINDOWS/assembly/GAC_MSIL/System.Web.Mvc/2.0.0.0__31bf3856ad364e35/System.Web.Mvc.dll
        Deployment: GAC-deployed

        ASP.NET MVC Futures Assembly Information (Microsoft.Web.Mvc.dll)
        Assembly version: ASP.NET MVC 2 RTM Futures (2.0.50217.0)
        Full name: Microsoft.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=null
        Code base: file:///xxxx/bin/Microsoft.Web.Mvc.DLL
        Deployment: bin-deployed

    This is driving me crazy! Because I thought this might be some VB issue, I created a new MVC2 project using C# and tried the same as above. I added the following using statement to the top of HomeController.cs:

        using Microsoft.Web.Mvc;

    This time, in the About action method, I could only manage to call the strongly typed RedirectToAction by typing the full command as follows:

        return Microsoft.Web.Mvc.ControllerExtensions.RedirectToAction<HomeController>(this, c => c.Index());

    Even though I had a using statement at the top of the class, if I tried to call the strongly typed RedirectToAction as follows:

        return RedirectToAction<HomeController>(c => c.Index());

    I would get the following compile error:

        Error 1 The non-generic method 'System.Web.Mvc.Controller.RedirectToAction(string)' cannot be used with type arguments

    What gives? It's not like I'm trying to do anything out of the ordinary. It's a simple vanilla MVC2 project with only a reference to the Futures assembly. I'm hoping I've missed something obvious, but I've been scratching my head for too long, so I figured I'd seek some assistance. If anyone's managed to get this simple scenario working (in VB and/or C#), could they please let me know what, if anything, they did differently? Thanks!
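
    One workaround worth considering (a sketch, not a confirmed fix): since the fully qualified static call works, the Futures extension can be wrapped in a protected helper on a base controller, sidestepping the overload-resolution clash with Controller.RedirectToAction(string). The helper name RedirectTo is my own invention:

        using System;
        using System.Linq.Expressions;
        using System.Web.Mvc;

        public abstract class BaseController : Controller
        {
            // wraps the MVC Futures strongly typed redirect under a name
            // that cannot collide with Controller.RedirectToAction(string)
            protected RedirectToRouteResult RedirectTo<TController>(
                Expression<Action<TController>> action) where TController : Controller
            {
                return Microsoft.Web.Mvc.ControllerExtensions.RedirectToAction(this, action);
            }
        }

        // usage in a controller inheriting BaseController:
        // return RedirectTo<HomeController>(c => c.Index());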

    Read the article

  • using Silverlight 3's HtmlPage.Window.Navigate method to reuse an already open browser window

    - by Phil
    Hi, I want to use an external browser window to implement preview functionality in a Silverlight application. There is a list of items, and whenever the user clicks one of these items, it's opened in a separate browser window (the content is a PDF document, which is why it is handled outside of the SL app). Now, to achieve this, I simply use:

        HtmlPage.Window.Navigate(new Uri("http://www.bing.com"));

    which works fine. Now my client doesn't like the fact that every click opens up a new browser window. He would like to see the browser window reused every time an item is clicked. So I went out and tried implementing this.

    Option 1 - Use the overload of the Navigate method, like so:

        HtmlPage.Window.Navigate(new Uri("http://www.bing.com"), "foo");

    I was assuming the window would be reused when the same target parameter value (foo) was used in subsequent calls. This does not work; I get a new window every time.

    Option 2 - Use the PopupWindow method on the HtmlPage:

        HtmlPage.PopupWindow(new Uri("http://www.bing.com"), "blah", new HtmlPopupWindowOptions());

    This does not work; I get a new window every time.

    Option 3 - Get a handle to the opened window and reuse that in subsequent calls:

        private HtmlWindow window;

        private void navigationButton_Click(object sender, RoutedEventArgs e)
        {
            if (window == null)
                window = HtmlPage.Window.Navigate(new Uri("http://www.bing.com"), "blah");
            else
                window.Navigate(new Uri("http://www.bing.com"), "blah");
            if (window == null)
                MessageBox.Show("it's null");
        }

    This does not work. I tried the same for the PopupWindow() method, and the window is null every time, so a new window is opened on every click. I have checked both the EnableHtmlAccess and the IsPopupWindowAllowed properties, and they return true, as they should.

    Option 4 - Use the Eval method to execute some custom JavaScript:

        private const string javascript = @"var popup = window.open('', 'blah');
            if (popup.location != 'http://www.bing.com') {
                popup.location = 'http://www.bing.com';
            }
            popup.focus();";

        private void navigationButton_Click(object sender, RoutedEventArgs e)
        {
            HtmlPage.Window.Eval(javascript);
        }

    This does not work; I get a new window every time.

    Option 5 - Use CreateInstance to run some custom JavaScript on the page:

        private void navigationButton_Click(object sender, RoutedEventArgs e)
        {
            HtmlPage.Window.CreateInstance("thisIsPlainHell");
        }

    and in my aspx I have:

        function thisIsPlainHell() {
            var popup = window.open('http://www.bing.com', 'blah');
            popup.focus();
        }

    Guess what? This does work. The only thing is that the window behaves a little strangely, and I'm not sure why. I'm behind a proxy, and in all other scenarios I'm prompted for my password. In this case, however, I am not (and am thus not able to reach the external site - Bing in this case). This is not really a huge issue at the moment, but I just don't understand what's going on here. Whenever I type another URL in the address bar of the popup window (e.g. www.google.com) and press enter, it opens up another window and prompts me for my proxy password.

    As a temporary solution option 5 could do, but I don't like the fact that Silverlight is not able to manage this. One of the main reasons my client has opted for Silverlight is to be protected against all the browser-specific hacking that comes with JavaScript. Am I doing something wrong? I'm definitely no JavaScript expert, so I'm hoping it's something obvious I'm missing here. Cheers, Phil
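
    One more avenue that might keep everything on the Silverlight side (a hedged sketch, untested against the proxy behaviour described above): invoke the browser's own window.open through HtmlPage.Window.Invoke, which calls a named member on the JavaScript window object directly rather than going through Navigate. Browsers generally reuse the window when the same name is passed again; 'previewWindow' is an arbitrary name chosen for this sketch:

        using System.Windows.Browser;

        // invokes window.open(url, 'previewWindow') via the DOM bridge
        private void OpenPreview(Uri uri)
        {
            HtmlPage.Window.Invoke("open", uri.ToString(), "previewWindow");
        }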

    Read the article

  • Delphi Editbox causing unexplainable errors...

    - by NeoNMD
    On a form I have 8 edit boxes that I do the exact same thing with. They are arranged in 2 sets of 4; one set is Upper and the other is Lower. I kept getting errors clearing all the edit boxes, so I went through clearing them one by one and found that one of the edit boxes just didn't work. When I tried to run the program and change that edit box, it caused the debugger to jump to a point in the code involving the database (even though the edit boxes had nothing to do with the database and aren't in a procedure or call stack with a database in it) and report an access violation there. So I removed all mention of that edit box and the code worked perfectly again. I then deleted that edit box, copied and pasted another edit box and left all values the same, then went through my code, copied the code from the other sections and simply renamed it for the new edit box - and it STILL causes an error, even though it is entirely new. I cannot figure it out, so I ask you all: what the hell? The edit box in question is "Edit1".

    unit DefinitionCoreV2; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, SQLiteTable3, StdCtrls; type TDefinitionFrm = class(TForm) GrpCompetition: TGroupBox; CmbCompSele: TComboBox; BtnCompetitionAdd: TButton; BtnCompetitionUpdate: TButton; BtnCompetitionRevert: TButton; GrpCompetitionDetails: TGroupBox; LblCompetitionIDTitle: TLabel; EdtCompID: TEdit; LblCompetitionDescriptionTitle: TLabel; EdtCompDesc: TEdit; LblCompetitionNotesTitle: TLabel; EdtCompNote: TEdit; LblCompetitionLocationTitle: TLabel; EdtCompLoca: TEdit; BtnCompetitionDelete: TButton; GrpSection: TGroupBox; LblSectionID: TLabel; LblGender: TLabel; LblAge: TLabel; LblLevel: TLabel; LblWeight: TLabel; LblType: TLabel; LblHeight: TLabel; LblCompetitionID: TLabel; BtnSectionAdd: TButton; EdtSectionID: TEdit; CmbGender: TComboBox; BtnSectionUpdate: TButton; BtnSectionRevert: TButton; CmbAgeRange: TComboBox; CmbLevelRange: TComboBox; CmbType: TComboBox; CmbWeightRange: TComboBox; CmbHeightRange: TComboBox; EdtSectCompetitionID: TEdit; BtnSectionDelete: TButton; GrpSectionDetails: TGroupBox; EdtLowerAge: TEdit; EdtLowerWeight: TEdit; EdtLowerHeight: TEdit; EdtUpperAge: TEdit; EdtUpperLevel: TEdit; EdtUpperWeight: TEdit; EdtUpperHeight: TEdit; LblAgeRule: TLabel; LblLevelRule: TLabel; LblWeightRule: TLabel; LblHeightRule: TLabel; LblCompetitionSelect: TLabel; LblSectionSelect: TLabel; CmbSectSele: TComboBox; Edit1: TEdit; procedure FormCreate(Sender: TObject); procedure BtnCompetitionAddClick(Sender: TObject); procedure CmbCompSeleChange(Sender: TObject); procedure BtnCompetitionUpdateClick(Sender: TObject); procedure BtnCompetitionRevertClick(Sender: TObject); procedure BtnCompetitionDeleteClick(Sender: TObject); procedure CmbSectSeleChange(Sender: TObject); procedure BtnSectionAddClick(Sender: TObject); procedure BtnSectionUpdateClick(Sender: TObject); procedure BtnSectionRevertClick(Sender: TObject); procedure BtnSectionDeleteClick(Sender: TObject); procedure CmbAgeRangeChange(Sender: TObject); procedure CmbLevelRangeChange(Sender: TObject); procedure CmbWeightRangeChange(Sender: TObject); procedure CmbHeightRangeChange(Sender: TObject); private procedure UpdateCmbCompSele; procedure AddComp; procedure RevertComp; procedure AddSect; procedure RevertSect; procedure UpdateCmbSectSele; procedure ClearSect; { Private declarations } public { Public declarations } end; var DefinitionFrm: TDefinitionFrm; implementation {$R *.dfm} procedure TDefinitionFrm.UpdateCmbCompSele;
var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; sCompTitle : string; bNext : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM CompetitionTable'); try CmbCompSele.Items.Clear; Repeat begin sCompTitle:=sltb.FieldAsString(sltb.FieldIndex['CompetitionID'])+':'+sltb.FieldAsString(sltb.FieldIndex['Description']); CmbCompSele.Items.Add(sCompTitle); bNext := sltb.Next; end; Until sltb.EOF; finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.UpdateCmbSectSele; var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; sSQL : string; sSectTitle : string; bNext : boolean; bLast : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE CompetitionID = '+EdtCompID.text); If sltb.RowCount =0 then begin sltb := slDb.GetTable('SELECT * FROM SectionTable'); bLast:= sltb.MoveLast; sSQL := 'INSERT INTO SectionTable(SectionID,CompetitionID,Gender,Type) VALUES ('+IntToStr(sltb.FieldAsInteger(sltb.FieldIndex['SectionID'])+1)+','+EdtCompID.text+',1,1)'; sldb.ExecSQL(sSQL); sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE CompetitionID = '+EdtCompID.text); end; try CmbSectSele.Items.Clear; Repeat begin sSectTitle:=sltb.FieldAsString(sltb.FieldIndex['SectionID'])+':'+sltb.FieldAsString(sltb.FieldIndex['Type'])+':'+sltb.FieldAsString(sltb.FieldIndex['Gender'])+':'+sltb.FieldAsString(sltb.FieldIndex['Age'])+':'+sltb.FieldAsString(sltb.FieldIndex['Level'])+':'+sltb.FieldAsString(sltb.FieldIndex['Weight'])+':'+sltb.FieldAsString(sltb.FieldIndex['Height']); CmbSectSele.Items.Add(sSectTitle); //CmbType.Items.Strings[sltb.FieldAsInteger(sltb.FieldIndex['Type'])] Works but has logic errors bNext := sltb.Next; end; Until sltb.EOF; finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.AddComp; var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; bLast : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM CompetitionTable'); try bLast:= sltb.MoveLast; sSQL := 'INSERT INTO CompetitionTable(CompetitionID,Description) VALUES ('+IntToStr(sltb.FieldAsInteger(sltb.FieldIndex['CompetitionID'])+1)+',"New Competition")'; sldb.ExecSQL(sSQL); finally sltb.Free; end; finally sldb.Free; end; UpdateCmbCompSele; end; procedure TDefinitionFrm.AddSect; var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; bLast : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM SectionTable'); try bLast:= sltb.MoveLast; sSQL := 'INSERT INTO SectionTable(SectionID,CompetitionID,Gender,Type) VALUES 
('+IntToStr(sltb.FieldAsInteger(sltb.FieldIndex['SectionID'])+1)+','+EdtCompID.text+',1,1)'; sldb.ExecSQL(sSQL); finally sltb.Free; end; finally sldb.Free; end; UpdateCmbSectSele; end; procedure TDefinitionFrm.RevertComp; var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; iID : integer; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try If CmbCompSele.Text <> '' then begin iID := StrToInt(Copy(CmbCompSele.Text,0,Pos(':',CmbCompSele.Text)-1)); sltb := slDb.GetTable('SELECT * FROM CompetitionTable WHERE CompetitionID='+IntToStr(iID))//ItemIndex starts at 0, CompID at 1 end else sltb := slDb.GetTable('SELECT * FROM CompetitionTable WHERE CompetitionID=1'); try EdtCompID.Text:=sltb.FieldAsString(sltb.FieldIndex['CompetitionID']); EdtCompLoca.Text:=sltb.FieldAsString(sltb.FieldIndex['Location']); EdtCompDesc.Text:=sltb.FieldAsString(sltb.FieldIndex['Description']); EdtCompNote.Text:=sltb.FieldAsString(sltb.FieldIndex['Notes']); finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.RevertSect; var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; iID : integer; sTemp : string; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try If CmbCompSele.Text <> '' then begin iID := StrToInt(Copy(CmbSectSele.Text,0,Pos(':',CmbSectSele.Text)-1)); sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE SectionID='+IntToStr(iID));//ItemIndex starts at 0, CompID at 1 end else sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE CompetitionID='+EdtCompID.Text); try EdtSectionID.Text:=sltb.FieldAsString(sltb.FieldIndex['SectionID']); EdtSectCompetitionID.Text:=sltb.FieldAsString(sltb.FieldIndex['CompetitionID']); Case sltb.FieldAsInteger(sltb.FieldIndex['Type']) of 1 : CmbType.ItemIndex:=0; 2 : CmbType.ItemIndex:=0; 3 : CmbType.ItemIndex:=1; 4 : CmbType.ItemIndex:=1; end; Case sltb.FieldAsInteger(sltb.FieldIndex['Gender']) of 1 : CmbGender.ItemIndex:=0; 2 : CmbGender.ItemIndex:=1; 3 : CmbGender.ItemIndex:=2; end; sTemp := sltb.FieldAsString(sltb.FieldIndex['Age']); if sTemp <> '' then begin //Decode end else begin LblAgeRule.Hide; EdtLowerAge.Text :=''; EdtLowerAge.Hide; EdtUpperAge.Text :=''; EdtUpperAge.Hide; end; sTemp := sltb.FieldAsString(sltb.FieldIndex['Level']); if sTemp <> '' then begin //Decode end else begin LblLevelRule.Hide; Edit1.Text :=''; Edit1.Hide; EdtUpperLevel.Text :=''; EdtUpperLevel.Hide; end; sTemp := sltb.FieldAsString(sltb.FieldIndex['Weight']); if sTemp <> '' then begin //Decode end else begin LblWeightRule.Hide; EdtLowerWeight.Text :=''; EdtLowerWeight.Hide; EdtUpperWeight.Text :=''; EdtUpperWeight.Hide; end; sTemp := sltb.FieldAsString(sltb.FieldIndex['Height']); if sTemp <> '' then begin //Decode end else begin LblHeightRule.Hide; EdtLowerHeight.Text :=''; EdtLowerHeight.Hide; EdtUpperHeight.Text :=''; EdtUpperHeight.Hide; end; finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.BtnCompetitionAddClick(Sender: TObject); begin AddComp end; procedure TDefinitionFrm.ClearSect; begin CmbSectSele.Clear; EdtSectionID.Text:=''; EdtSectCompetitionID.Text:=''; CmbType.Clear; CmbGender.Clear; CmbAgeRange.Clear; EdtLowerAge.Text:=''; 
EdtUpperAge.Text:=''; CmbLevelRange.Clear; Edit1.Text:=''; EdtUpperLevel.Text:=''; CmbWeightRange.Clear; EdtLowerWeight.Text:=''; EdtUpperWeight.Text:=''; CmbHeightRange.Clear; EdtLowerHeight.Text:=''; EdtUpperHeight.Text:=''; end; procedure TDefinitionFrm.CmbCompSeleChange(Sender: TObject); begin If CmbCompSele.ItemIndex <> -1 then begin RevertComp; GrpSection.Enabled:=True; CmbSectSele.Clear; ClearSect; UpdateCmbSectSele; end; end; procedure TDefinitionFrm.BtnCompetitionUpdateClick(Sender: TObject); var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sSQL:= 'UPDATE CompetitionTable SET Description="'+EdtCompDesc.Text+'",Location="'+EdtCompLoca.Text+'",Notes="'+EdtCompNote.Text+'" WHERE CompetitionID ="'+EdtCompID.Text+'";'; sldb.ExecSQL(sSQL); finally sldb.Free; end; end; procedure TDefinitionFrm.BtnCompetitionRevertClick(Sender: TObject); begin RevertComp; end; procedure TDefinitionFrm.BtnCompetitionDeleteClick(Sender: TObject); var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; iID : integer; begin If CmbCompSele.Text <> '' then begin If (CmbCompSele.Text[1] ='1') and (CmbCompSele.Text[2] =':') then begin MessageDlg('Deleting the last record is a very bad idea :/',mtInformation,[mbOK],0); end else begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try iID := StrToInt(Copy(CmbCompSele.Text,0,Pos(':',CmbCompSele.Text)-1)); sSQL:= 'DELETE FROM SectionTable WHERE CompetitionID='+IntToStr(iID)+';'; sldb.ExecSQL(sSQL); sSQL:= 'DELETE FROM CompetitionTable WHERE CompetitionID='+IntToStr(iID)+';'; sldb.ExecSQL(sSQL); finally sldb.Free; end; CmbCompSele.ItemIndex:=0; UpdateCmbCompSele; RevertComp; CmbCompSele.Text:='Select Competition'; end; end; end; procedure TDefinitionFrm.FormCreate(Sender: TObject); begin UpdateCmbCompSele; end; procedure TDefinitionFrm.CmbSectSeleChange(Sender: TObject); begin RevertSect; end; procedure TDefinitionFrm.BtnSectionAddClick(Sender: TObject); begin AddSect; end; procedure TDefinitionFrm.BtnSectionUpdateClick(Sender: TObject); //change fields values var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; iTypeCode : integer; iGenderCode : integer; sAgeStr, sLevelStr, sWeightStr, sHeightStr : string; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try If CmbType.Text='Fighting' then iTypeCode := 1 else iTypeCode := 3; If CmbGender.Text='Male' then iGenderCode := 1 else if CmbGender.Text='Female' then iGenderCode := 2 else iGenderCode := 3; Case CmbAgeRange.ItemIndex of 0:sAgeStr := 'o-'+EdtLowerAge.Text; 1:sAgeStr := 'u-'+EdtLowerAge.Text; 2:sAgeStr := EdtLowerAge.Text+'-'+EdtUpperAge.Text; end; Case CmbLevelRange.ItemIndex of 0:sLevelStr := 'o-'+Edit1.Text; 1:sLevelStr := 'u-'+Edit1.Text; 2:sLevelStr := Edit1.Text+'-'+EdtUpperLevel.Text; end; Case CmbWeightRange.ItemIndex of 0:sWeightStr := 'o-'+EdtLowerWeight.Text; 1:sWeightStr := 'u-'+EdtLowerWeight.Text; 2:sWeightStr := EdtLowerWeight.Text+'-'+EdtUpperWeight.Text; end; Case 
CmbHeightRange.ItemIndex of 0:sHeightStr := 'o-'+EdtLowerHeight.Text; 1:sHeightStr := 'u-'+EdtLowerHeight.Text; 2:sHeightStr := EdtLowerHeight.Text+'-'+EdtUpperHeight.Text; end; sSQL:= 'UPDATE SectionTable SET Type="'+IntToStr(iTypeCode)+'",Gender="'+IntToStr(iGenderCode)+'" WHERE SectionID ="'+EdtSectionID.Text+'";'; sldb.ExecSQL(sSQL); finally sldb.Free; end; end; procedure TDefinitionFrm.BtnSectionRevertClick(Sender: TObject); begin RevertSect; end; procedure TDefinitionFrm.BtnSectionDeleteClick(Sender: TObject); var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; begin If CmbSectSele.Text[1] ='1' then begin MessageDlg('Deleting the last record is a very bad idea :/',mtInformation,[mbOK],0); end else begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sSQL:= 'DELETE FROM SectionTable WHERE SectionID='+CmbSectSele.Text[1]+';'; sldb.ExecSQL(sSQL); finally sldb.Free; end; CmbSectSele.ItemIndex:=0; UpdateCmbSectSele; RevertSect; CmbSectSele.Text:='Select Competition'; end; end; procedure TDefinitionFrm.CmbAgeRangeChange(Sender: TObject); begin Case CmbAgeRange.ItemIndex of 0: begin EdtLowerAge.Show; LblAgeRule.Caption:='Over and including'; LblAgeRule.Show; EdtUpperAge.Hide; end; 1: begin EdtLowerAge.Show; LblAgeRule.Caption:='Under and including'; LblAgeRule.Show; EdtUpperAge.Hide; end; 2: begin EdtLowerAge.Show; LblAgeRule.Caption:='LblAgeRule'; LblAgeRule.Hide; EdtUpperAge.Show; end; end; end; procedure TDefinitionFrm.CmbLevelRangeChange(Sender: TObject); begin Case CmbLevelRange.ItemIndex of 0: begin Edit1.Show; LblLevelRule.Caption:='Over and including'; LblLevelRule.Show; EdtUpperLevel.Hide; end; 1: begin Edit1.Show; LblLevelRule.Caption:='Under and including'; LblLevelRule.Show; EdtUpperLevel.Hide; end; 2: begin Edit1.Show; LblLevelRule.Caption:='LblLevelRule'; LblLevelRule.Hide; EdtUpperLevel.Show; end; end; end; procedure TDefinitionFrm.CmbWeightRangeChange(Sender: TObject); begin Case CmbWeightRange.ItemIndex of 0: begin EdtLowerWeight.Show; LblWeightRule.Caption:='Over and including'; LblWeightRule.Show; EdtUpperWeight.Hide; end; 1: begin EdtLowerWeight.Show; LblWeightRule.Caption:='Under and including'; LblWeightRule.Show; EdtUpperWeight.Hide; end; 2: begin EdtLowerWeight.Show; LblWeightRule.Caption:='LblWeightRule'; LblWeightRule.Hide; EdtUpperWeight.Show; end; end; end; procedure TDefinitionFrm.CmbHeightRangeChange(Sender: TObject); begin Case CmbHeightRange.ItemIndex of 0: begin EdtLowerHeight.Show; LblHeightRule.Caption:='Over and including'; LblHeightRule.Show; EdtUpperHeight.Hide; end; 1: begin EdtLowerHeight.Show; LblHeightRule.Caption:='Under and including'; LblHeightRule.Show; EdtUpperHeight.Hide; end; 2: begin EdtLowerHeight.Show; LblHeightRule.Caption:='LblHeightRule'; LblHeightRule.Hide; EdtUpperHeight.Show; end; end; end; end.

    Read the article

  • How to do inclusive range queries when only half-open range is supported (ala SortedMap.subMap)

    - by polygenelubricants
    On SortedMap.subMap

    This is the API for SortedMap<K,V>.subMap:

        SortedMap<K,V> subMap(K fromKey, K toKey) : Returns a view of the portion of this map whose keys range from fromKey, inclusive, to toKey, exclusive.

    This inclusive lower bound, exclusive upper bound combo ("half-open range") is something that is prevalent in Java, and while it does have its benefits, it also has its quirks, as we shall soon see. The following snippet illustrates a simple usage of subMap:

        static <K,V> SortedMap<K,V> someSortOfSortedMap() {
            return Collections.synchronizedSortedMap(new TreeMap<K,V>());
        }
        //...
        SortedMap<Integer,String> map = someSortOfSortedMap();
        map.put(1, "One");
        map.put(3, "Three");
        map.put(5, "Five");
        map.put(7, "Seven");
        map.put(9, "Nine");
        System.out.println(map.subMap(0, 4));  // prints "{1=One, 3=Three}"
        System.out.println(map.subMap(3, 7));  // prints "{3=Three, 5=Five}"

    The last line is important: 7=Seven is excluded, due to the exclusive upper bound nature of subMap. Now suppose that we actually need an inclusive upper bound; then we could try to write a utility method like this:

        static <V> SortedMap<Integer,V> subMapInclusive(SortedMap<Integer,V> map, int from, int to) {
            return (to == Integer.MAX_VALUE) ? map.tailMap(from) : map.subMap(from, to + 1);
        }

    Then, continuing on with the above snippet, we get:

        System.out.println(subMapInclusive(map, 3, 7));
        // prints "{3=Three, 5=Five, 7=Seven}"
        map.put(Integer.MAX_VALUE, "Infinity");
        System.out.println(subMapInclusive(map, 5, Integer.MAX_VALUE));
        // {5=Five, 7=Seven, 9=Nine, 2147483647=Infinity}

    A couple of key observations need to be made:

    - The good news is that we don't care about the type of the values, but...
    - subMapInclusive assumes Integer keys for to + 1 to work. A generic version that also takes e.g. Long keys is not possible (see related questions). Not to mention that for Long, we need to compare against Long.MAX_VALUE instead.
    - Overloads for the numeric primitive boxed types Byte, Character, etc, as keys, must all be written individually.
    - A special check needs to be made for toInclusive == Integer.MAX_VALUE, because +1 would overflow, and subMap would throw IllegalArgumentException: fromKey > toKey.
    - This, generally speaking, is an overly ugly and overly specific solution. What about String keys? Or some unknown type that may not even be Comparable<?>?

    So the question is: is it possible to write a general subMapInclusive method that takes a SortedMap<K,V>, and K fromKey, K toKey, and performs inclusive-range subMap queries?

    Related questions: Are upper bounds of indexed ranges always assumed to be exclusive? Is it possible to write a generic +1 method for numeric box types in Java?

    On NavigableMap

    It should be mentioned that there's a NavigableMap.subMap overload that takes two additional boolean variables to signify whether the bounds are inclusive or exclusive. Had this been made available in SortedMap, then none of the above would've even been asked. So working with a NavigableMap<K,V> for inclusive range queries would've been ideal, but while Collections provides utility methods for SortedMap (among other things), we aren't afforded the same luxury with NavigableMap.

    Related question: Writing a synchronized thread-safety wrapper for NavigableMap.

    On APIs providing only exclusive upper bound range queries

    Does this highlight a problem with exclusive upper bound range queries? How were inclusive range queries done in the past, when an exclusive upper bound was the only available functionality?

    Read the article

  • LINQ Generic Query with inherited base class?

    - by sah302
    I am trying to write some generic LINQ queries for my entities, but am having trouble with the more complex things. Right now I am using an EntityDao class that has all my generics, and each of my object class DAOs (such as AccomplishmentDao) inherits it. An example:

        using LCFVB.ObjectsNS;
        using LCFVB.EntityNS;

        namespace AccomplishmentNS
        {
            public class AccomplishmentDao : EntityDao<Accomplishment> {}
        }

    Now my EntityDao has the following code:

        using LCFVB.ObjectsNS;
        using LCFVB.LinqDataContextNS;

        namespace EntityNS
        {
            public abstract class EntityDao<ImplementationType> where ImplementationType : Entity
            {
                public ImplementationType getOneByValueOfProperty(string getProperty, object getValue)
                {
                    ImplementationType entity = null;
                    if (getProperty != null && getValue != null)
                    {
                        //Nhibernate Example:
                        //ImplementationType entity = default(ImplementationType);
                        //entity = Me.session.CreateCriteria(Of ImplementationType)().Add(Expression.Eq(getProperty, getValue)).UniqueResult(Of InterfaceType)()

                        LCFDataContext lcfdatacontext = new LCFDataContext();
                        //Generic LINQ Query Here
                        lcfdatacontext.GetTable<ImplementationType>();
                        lcfdatacontext.SubmitChanges();
                        lcfdatacontext.Dispose();
                    }
                    return entity;
                }

                public bool insertRow(ImplementationType entity)
                {
                    if (entity != null)
                    {
                        //Nhibernate Example:
                        //Me.session.Save(entity, entity.Id)
                        //Me.session.Flush()

                        LCFDataContext lcfdatacontext = new LCFDataContext();
                        //Generic LINQ Query Here
                        lcfdatacontext.GetTable<ImplementationType>().InsertOnSubmit(entity);
                        lcfdatacontext.SubmitChanges();
                        lcfdatacontext.Dispose();
                        return true;
                    }
                    else
                    {
                        return false;
                    }
                }
            }
        }

    I have gotten the insertRow function working; however, I am not even sure how to go about doing getOneByValueOfProperty. The closest thing I could find on this site was: http://stackoverflow.com/questions/2157560/generic-linq-to-sql-query

    How can I pass in the column name and the value I am checking against generically, using my current set-up? From that link it seems impossible using a where predicate, because the entity class doesn't know what any of the properties are until I pass them in.

    Lastly, I need some way of creating a new object with the return type set to the implementation type. In NHibernate (which I am trying to convert from), it was simply this line that did it:

        ImplementationType entity = default(ImplementationType);

    However, default is an NHibernate command; how would I do this for LINQ?

    EDIT: getOne doesn't seem to work even when just going off the base class (this is a partial class of the auto-generated LINQ classes). I even removed the generics. I tried:

        namespace ObjectsNS
        {
            public partial class Accomplishment
            {
                public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause)
                {
                    Accomplishment entity = new Accomplishment();
                    if (singleOrDefaultClause != null)
                    {
                        LCFDataContext lcfdatacontext = new LCFDataContext();
                        //Generic LINQ Query Here
                        entity = lcfdatacontext.Accomplishments.SingleOrDefault(singleOrDefaultClause);
                        lcfdatacontext.Dispose();
                    }
                    return entity;
                }
            }
        }

    I get the following error:

        Error 1 Overload resolution failed because no accessible 'SingleOrDefault' can be called with these arguments:
        Extension method 'Public Function SingleOrDefault(predicate As System.Linq.Expressions.Expression(Of System.Func(Of Accomplishment, Boolean))) As Accomplishment' defined in 'System.Linq.Queryable': Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))'.
        Extension method 'Public Function SingleOrDefault(predicate As System.Func(Of Accomplishment, Boolean)) As Accomplishment' defined in 'System.Linq.Enumerable': Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean)'. 14 LCF

    Okay, no problem. I changed:

        public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause)

    to:

        public Accomplishment getOneByWhereClause(Expression<Func<Accomplishment, bool>> singleOrDefaultClause)

    The error goes away. Alright, but now when I try to call the method via:

        Accomplishment accomplishment = new Accomplishment();
        var result = accomplishment.getOneByWhereClause(x => x.Id = 4)

    it doesn't work; it says x is not declared. I also tried getOne, and various other Expression overloads. =(
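
    For what it's worth, here is a minimal sketch of how the generic lookup could look on EntityDao with a typed predicate instead of a property-name string. It assumes the LCFDataContext from the post, and note that the call site must use the equality operator (==), not assignment (=), which appears to be the cause of the final error above:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        // inside EntityDao<ImplementationType>
        public ImplementationType getOneByWhereClause(
            Expression<Func<ImplementationType, bool>> predicate)
        {
            using (var lcfdatacontext = new LCFDataContext())
            {
                // Table<T> implements IQueryable<T>, so the Queryable
                // SingleOrDefault overload accepts the expression tree
                return lcfdatacontext.GetTable<ImplementationType>()
                                     .SingleOrDefault(predicate);
            }
        }

        // usage from a concrete DAO (equality, not assignment):
        // var result = new AccomplishmentDao().getOneByWhereClause(x => x.Id == 4);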

    Read the article

  • Is it bad practice to apply class-based design to JavaScript programs?

    - by helixed
    JavaScript is a prototype-based language, and yet it has the ability to mimic some of the features of class-based object-oriented languages. For example, JavaScript does not have a concept of public and private members, but through the magic of closures it's still possible to provide the same functionality. Similarly, method overloading, interfaces, namespaces and abstract classes can all be added in one way or another. Lately, as I've been programming in JavaScript, I've felt like I'm trying to turn it into a class-based language instead of using it in the way it's meant to be used. It seems like I'm trying to force the language to conform to what I'm used to.

    The following is some JavaScript code I've written recently. Its purpose is to abstract away some of the effort involved in drawing to the HTML5 canvas element.

        /* Defines the Drawing namespace. */
        var Drawing = {};

        /*
          Abstract base which represents an element to be drawn on the screen.
          @param context The graphical context in which this Node is drawn.
          @param position The position of the center of this Node.
        */
        Drawing.Node = function(context, position) {
            return {
                /*
                  The method which performs the actual drawing code for this Node.
                  This method must be overridden in any subclasses of Node.
                */
                draw: function() {
                    throw Exception.MethodNotOverridden;
                },

                /*
                  Returns the graphical context for this Node.
                  @return The graphical context for this Node.
                */
                getContext: function() {
                    return context;
                },

                /*
                  Returns the position of this Node.
                  @return The position of this Node.
                */
                getPosition: function() {
                    return position;
                },

                /*
                  Sets the position of this Node.
                  @param thePosition The position of this Node.
                */
                setPosition: function(thePosition) {
                    position = thePosition;
                }
            };
        }

        /* Defines the Shape namespace. */
        var Shape = {};

        /*
          A circle shape implementation of Drawing.Node.
          @param context The graphical context in which this Circle is drawn.
          @param position The center of this Circle.
          @param radius The radius of this Circle.
          @param color The color of this Circle.
        */
        Shape.Circle = function(context, position, radius, color) {
            // check the parameters
            if (radius < 0) throw Exception.InvalidArgument;

            var node = Drawing.Node(context, position);

            // override the node drawing method
            node.draw = function() {
                var context = this.getContext();
                var position = this.getPosition();
                context.fillStyle = color;
                context.beginPath();
                context.arc(position.x, position.y, radius, 0, Math.PI * 2, true);
                context.closePath();
                context.fill();
            }

            /*
              Returns the radius of this Circle.
              @return The radius of this Circle.
            */
            node.getRadius = function() {
                return radius;
            };

            /*
              Sets the radius of this Circle.
              @param theRadius The new radius of this Circle.
            */
            node.setRadius = function(theRadius) {
                radius = theRadius;
            };

            /*
              Returns the color of this Circle.
              @return The color of this Circle.
            */
            node.getColor = function() {
                return color;
            };

            /*
              Sets the color of this Circle.
              @param theColor The new color of this Circle.
            */
            node.setColor = function(theColor) {
                color = theColor;
            };

            // return the node
            return node;
        };

    The code works exactly like it should for a user of Shape.Circle, but it feels like it's held together with duct tape. Can somebody provide some insight on this?
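    For contrast, here is roughly the same Circle sketched in the constructor-plus-prototype style the language natively encourages. It is one possible answer to the question, not the only one, and it deliberately trades the closure-based privacy above for shared methods and public state:

        // Constructor + prototype version: methods live once on the prototype.
        function DrawingNode(context, position) {
            this.context = context;
            this.position = position;
        }
        DrawingNode.prototype.draw = function() {
            throw new Error("draw() must be overridden");
        };

        function Circle(context, position, radius, color) {
            DrawingNode.call(this, context, position); // chain the base constructor
            if (radius < 0) throw new Error("radius must be non-negative");
            this.radius = radius;
            this.color = color;
        }
        // Inherit: Circle's prototype chain goes through DrawingNode's prototype
        Circle.prototype = Object.create(DrawingNode.prototype);
        Circle.prototype.constructor = Circle;

        Circle.prototype.draw = function() {
            var ctx = this.context, pos = this.position;
            ctx.fillStyle = this.color;
            ctx.beginPath();
            ctx.arc(pos.x, pos.y, this.radius, 0, Math.PI * 2, true);
            ctx.closePath();
            ctx.fill();
        };

        // usage (given a canvas 2D context):
        //   var c = new Circle(ctx, { x: 40, y: 40 }, 20, "tomato");
        //   c.draw();

    The design trade-off: the closure version gets true privacy but allocates every method per object, while the prototype version shares methods and plays well with instanceof at the cost of public fields.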

    Read the article

  • operator overloading of stream extraction operator in C++ help

    - by Crystal
    I'm having some trouble overloading the stream operator << in C++ for a homework assignment. I'm not really sure why I am getting these compile errors, since I thought I was doing it right... Here is my code:

    Complex.h:

        #ifndef COMPLEX_H
        #define COMPLEX_H

        class Complex
        {
            //friend ostream &operator<<(ostream &output, const Complex &complexObj) const;
        public:
            Complex(double = 0.0, double = 0.0);       // constructor
            Complex operator+(const Complex &) const;  // addition
            Complex operator-(const Complex &) const;  // subtraction
            void print() const;                        // output
        private:
            double real;       // real part
            double imaginary;  // imaginary part
        };

        #endif

    Complex.cpp:

        #include <iostream>
        #include "Complex.h"
        using namespace std;

        // Constructor
        Complex::Complex(double realPart, double imaginaryPart)
            : real(realPart), imaginary(imaginaryPart)
        {
        }

        // addition operator
        Complex Complex::operator+(const Complex &operand2) const
        {
            return Complex(real + operand2.real, imaginary + operand2.imaginary);
        }

        // subtraction operator
        Complex Complex::operator-(const Complex &operand2) const
        {
            return Complex(real - operand2.real, imaginary - operand2.imaginary);
        }

        // Overload << operator
        ostream &Complex::operator<<(ostream &output, const Complex &complexObj) const
        {
            cout << '(' << complexObj.real << ", " << complexObj.imaginary << ')';
            return output; // returning output allows chaining
        }

        // display a Complex object in the form: (a, b)
        void Complex::print() const
        {
            cout << '(' << real << ", " << imaginary << ')';
        }

    main.cpp:

        #include <iostream>
        #include "Complex.h"
        using namespace std;

        int main()
        {
            Complex x;
            Complex y(4.3, 8.2);
            Complex z(3.3, 1.1);

            cout << "x: ";
            x.print();
            cout << "\ny: ";
            y.print();
            cout << "\nz: ";
            z.print();

            x = y + z;
            cout << "\n\nx = y + z: " << endl;
            x.print();
            cout << " = ";
            y.print();
            cout << " + ";
            z.print();

            x = y - z;
            cout << "\n\nx = y - z: " << endl;
            x.print();
            cout << " = ";
            y.print();
            cout << " - ";
            z.print();

            cout << endl;
        }

    Compile errors:

        complex.cpp(23) : error C2039: '<<' : is not a member of 'Complex'
            complex.h(5) : see declaration of 'Complex'
        complex.cpp(24) : error C2270: '<<' : modifiers not allowed on nonmember functions
        complex.cpp(25) : error C2248: 'Complex::real' : cannot access private member declared in class 'Complex'
            complex.h(13) : see declaration of 'Complex::real'
            complex.h(5) : see declaration of 'Complex'
        complex.cpp(25) : error C2248: 'Complex::imaginary' : cannot access private member declared in class 'Complex'
            complex.h(14) : see declaration of 'Complex::imaginary'
            complex.h(5) : see declaration of 'Complex'

    Thanks!
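    The root cause: operator<< cannot be a member of Complex, because its left-hand operand is the stream, not a Complex. A minimal sketch of the conventional fix, which is essentially the commented-out friend line already sitting in Complex.h minus the trailing const:

        #include <iostream>

        class Complex
        {
            // non-member friend: the stream is the left operand, so this can't be a member
            friend std::ostream &operator<<(std::ostream &output, const Complex &complexObj);
        public:
            Complex(double r = 0.0, double i = 0.0) : real(r), imaginary(i) {}
        private:
            double real;
            double imaginary;
        };

        std::ostream &operator<<(std::ostream &output, const Complex &complexObj)
        {
            // write to 'output' rather than cout, so chaining works with any stream
            output << '(' << complexObj.real << ", " << complexObj.imaginary << ')';
            return output;
        }

    The trailing const also has to go, since a const qualifier at the end of a signature is only meaningful on member functions; that is what error C2270 is pointing at.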

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20 ms), times too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior.

    For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php. There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23 AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 9:14:47 and 9:15:00 the same request appears again, except it is from 9:04:43 AM, 20 seconds after the first request. Then it pops up a third time, with a request that started at 9:05:06 completing between 9:19:55 and 9:19:58!

    I'm suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [a different address than above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

        Request time    Complete time
        04:32:43        04:33:36
        04:32:50        04:33:36
        04:32:51        04:33:38
        04:33:05        04:33:38
        04:33:34        04:33:42
        04:33:05        04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, with the download.php page's code, or whether we're just seeing the evidence of some server-level overload as it plays out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php. It bears further investigation whether there's something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if press+125.php is a rabbit hole, but the coincidence is weird. Any ideas? I'm totally out of them. Apache maxed out? Things like that.

        // DOWNLOAD.php
        $file = new files();
        $file->comparison_filter("id", "=", $id); // sql to load
        if ($file->load()) {
            $file->serve();
        }

        // FILES
        function serve() {
            if ($this->is_loaded) {
                if (file_exists($this->get_value("filename"))) {
                    if ($this->get_value("content_type") != "") {
                        header("Content-Type: " . $this->get_value("content_type"));
                    }
                    header("Content-Length: " . filesize($this->get_value("filename")));
                    if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                        header("Cache-Control: private");
                        header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                    }
                    set_time_limit(0);
                    @readfile($this->get_value("filename"));
                    exit;
                }
            }
        }
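    One mitigation worth sketching (an assumption, not a diagnosis): stream the file in chunks and stop when the client disconnects, so an impatient user's repeat requests don't leave several long-running copies of readfile() serving dead connections. This reuses the accessor names from the class above and omits the conditional headers for brevity:

        function serve() {
            if (!$this->is_loaded) return;
            $path = $this->get_value("filename");
            if (!file_exists($path)) return;

            // headers as in the original serve()
            header("Content-Type: " . $this->get_value("content_type"));
            header("Content-Length: " . filesize($path));
            header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));

            set_time_limit(0);
            $fp = fopen($path, 'rb');
            while (!feof($fp)) {
                echo fread($fp, 8192);
                flush();                    // push the chunk out to Apache
                if (connection_aborted()) { // stop working for a dead client
                    break;
                }
            }
            fclose($fp);
            exit;
        }

    If the slow completions persist even with chunked output, that would point away from the PHP layer and toward the server or network, which is useful information in itself.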

    Read the article

  • Overloading '-' for array subtraction

    - by Chris Wilson
    I am attempting to subtract two int arrays, stored as class members, using an overloaded - operator, but I'm getting some peculiar output when I run tests. The overload definition is:

        Number& Number::operator-(const Number& NumberObject)
        {
            for (int count = 0; count < NumberSize; count++)
            {
                Value[count] -= NumberObject.Value[count];
            }
            return *this;
        }

    Whenever I run tests on this, NumberObject.Value[count] always seems to be returning a zero value. Can anyone see where I'm going wrong with this? The line in main() where this subtraction is being carried out is:

        cout << "The difference is: " << ArrayOfNumbers[0] - ArrayOfNumbers[1] << endl;

    ArrayOfNumbers contains two Number objects. The class declaration is:

        #include <iostream>
        using namespace std;

        class Number
        {
        private:
            int Value[50];
            int NumberSize;
        public:
            Number();                        // Default constructor
            Number(const Number&);           // Copy constructor
            Number(int, int);                // Non-default constructor
            void SetMemberValues(int, int);  // Manually set member values
            int GetNumberSize() const;       // Return NumberSize member
            int GetValue() const;            // Return Value[] member
            Number& operator-=(const Number&);
        };

        inline Number operator-(Number Lhs, const Number& Rhs);
        ostream& operator<<(ostream&, const Number&);

    The full class definition is as follows:

        #include <iostream>
        #include "../headers/number.h"
        using namespace std;

        // Default constructor
        Number::Number() {}

        // Copy constructor
        Number::Number(const Number& NumberObject)
        {
            int Temp[NumberSize];
            NumberSize = NumberObject.GetNumberSize();
            for (int count = 0; count < NumberObject.GetNumberSize(); count++)
            {
                Temp[count] = Value[count] - NumberObject.GetValue();
            }
        }

        // Manually set member values
        void Number::SetMemberValues(int NewNumberValue, int NewNumberSize)
        {
            NumberSize = NewNumberSize;
            for (int count = NewNumberSize - 1; count >= 0; count--)
            {
                Value[count] = NewNumberValue % 10;
                NewNumberValue = NewNumberValue / 10;
            }
        }

        // Non-default constructor
        Number::Number(int NumberValue, int NewNumberSize)
        {
            NumberSize = NewNumberSize;
            for (int count = NewNumberSize - 1; count >= 0; count--)
            {
                Value[count] = NumberValue % 10;
                NumberValue = NumberValue / 10;
            }
        }

        // Return the NumberSize member
        int Number::GetNumberSize() const
        {
            return NumberSize;
        }

        // Return the Value[] member
        int Number::GetValue() const
        {
            int ResultSoFar;
            for (int count2 = 0; count2 < NumberSize; count2++)
            {
                ResultSoFar = ResultSoFar * 10 + Value[count2];
            }
            return ResultSoFar;
        }

        Number& operator-=(const Number& Rhs)
        {
            for (int count = 0; count < NumberSize; count++)
            {
                Value[count] -= Rhs.Value[count];
            }
            return *this;
        }

        inline Number operator-(Number Lhs, const Number& Rhs)
        {
            Lhs -= Rhs;
            return Lhs;
        }

        // Overloaded output operator
        ostream& operator<<(ostream& OutputStream, const Number& NumberObject)
        {
            OutputStream << NumberObject.GetValue();
            return OutputStream;
        }
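    Two likely culprits, offered as a hedged reading of the code rather than a certainty: the copy constructor never copies Value (it fills a local Temp array from uninitialized members, and even reads NumberSize before assigning it), and GetValue() accumulates into an uninitialized ResultSoFar. Since the free operator- takes Lhs by value, every subtraction runs through that copy constructor first, which would explain the zeros and garbage. Corrected versions might look like:

        // Copy constructor: actually copy the digits.
        Number::Number(const Number& NumberObject)
        {
            NumberSize = NumberObject.NumberSize;
            for (int count = 0; count < NumberSize; count++)
                Value[count] = NumberObject.Value[count];
        }

        // Return the numeric value; start the accumulator at zero.
        int Number::GetValue() const
        {
            int ResultSoFar = 0;
            for (int count = 0; count < NumberSize; count++)
                ResultSoFar = ResultSoFar * 10 + Value[count];
            return ResultSoFar;
        }

    The stand-alone operator-= near the end of the definition file also appears to be missing its Number:: qualifier; as written it is a free function that cannot see NumberSize or Value at all.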

    Read the article

  • Problem with incomplete type while trying to detect existence of a member function

    - by abir
    I was trying to detect the existence of a member function for a class, where the function involves an incomplete type. The typedef is:

        struct foo;
        typedef std::allocator<foo> foo_alloc;

    The detection code is:

        struct has_alloc
        {
            template<typename U, U x> struct dummy;

            template<typename U>
            static char check(dummy<void* (U::*)(std::size_t), &U::allocate>*);

            template<typename U>
            static char (&check(...))[2];

            const static bool value = (sizeof(check<foo_alloc>(0)) == 1);
        };

    So far I was using the incomplete type foo with std::allocator without any error on VS2008. However, when I replaced it with a nearly identical implementation:

        template<typename T>
        struct allocator
        {
            T* allocate(std::size_t n)
            {
                return (T*)operator new (sizeof(T) * n);
            }
        };

    it gives an error saying that, as T is an incomplete type, there is a problem instantiating allocator<foo>, because allocate uses sizeof. GCC 4.5 with std::allocator also gives the error, so it seems that during the detection process the class needs to be completely instantiated, even when I am not using that function at all. What I was looking for is void* allocate(std::size_t), which is different from T* allocate(std::size_t).

    My questions are (three of them, but as they are correlated I thought it better not to create three separate questions):

    1. Why doesn't the MS std::allocator check for the incomplete type foo while instantiating? Are they following a trick which can be implemented?

    2. Why does the compiler need to instantiate allocator<T> to check the existence of the function, when sizeof is not used as the SFINAE mechanism to remove or add allocate in the overload resolution set? It should be noted that if I remove the generic implementation of allocate, leaving the declaration only, and specialize it for foo afterwards, such as:

        struct foo {};

        template<>
        struct allocator<foo>
        {
            foo* allocate(std::size_t n)
            {
                return (foo*)operator new (sizeof(foo) * n);
            }
        };

    placed after struct has_alloc, then it compiles in GCC 4.5, while VS2008 gives an error that allocator<foo> is already instantiated and the explicit specialization for allocator<foo> is already defined.

    3. Is it legal to use nested types of an std::allocator of an incomplete type, such as typedef foo_alloc::pointer foo_pointer;? Though it is practically working for me, I suspect nested types such as pointer may depend on the completeness of the type they take. It would be good to know if there is any possible way to typedef such types as foo_pointer, where the type pointer depends on the completeness of foo.

    NOTE: As the code is not copy-pasted from an editor, it may have some syntax errors; I will correct any I find. Also, the code (such as allocator) is not a complete implementation; I simplified it and typed only the portion I think is useful for this particular problem.
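    As a way to localize where the incompleteness bites, here is a small sketch built on the standard's lazy-instantiation rule, under the assumption that only instantiating a member's body requires the complete type, while declarations do not. It is illustrative only; as the question itself shows, real compilers have disagreed around these edges:

        #include <cstddef>

        struct foo; // deliberately left incomplete

        template<typename T>
        struct alloc_def
        {
            T* allocate(std::size_t n) { return (T*)operator new (sizeof(T) * n); }
        };

        int main()
        {
            alloc_def<foo>* p = 0; // fine: no instantiation at all for a pointer
            alloc_def<foo>  a;     // also fine: instantiating the class only
                                   // instantiates member *declarations*, and
                                   // foo* is a valid return type for incomplete foo
            (void)p; (void)a;
            // a.allocate(1);      // error once uncommented: calling allocate
                                   // instantiates its body, and sizeof(foo)
                                   // requires the complete type
            return 0;
        }

    Read against the detector above, the suggestion is that the failure appears wherever a compiler decides the body (and its sizeof) must be instantiated during the &U::allocate match, not in the declaration itself.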

    Read the article

  • Binary operator overloading on a templated class (C++)

    - by GRB
    Hi all, I was recently trying to gauge my operator overloading/template abilities, and as a small test created the Container class below. While this code compiles fine and works correctly under MSVC 2008 (displays 11), both MinGW/GCC and Comeau choke on the operator+ overload. As I trust them more than MSVC, I'm trying to figure out what I'm doing wrong. Here is the code:

        #include <iostream>
        using namespace std;

        template <typename T>
        class Container
        {
            friend Container<T> operator+ <> (Container<T>& lhs, Container<T>& rhs);
        public:
            void setobj(T ob);
            T getobj();
        private:
            T obj;
        };

        template <typename T>
        void Container<T>::setobj(T ob)
        {
            obj = ob;
        }

        template <typename T>
        T Container<T>::getobj()
        {
            return obj;
        }

        template <typename T>
        Container<T> operator+ <> (Container<T>& lhs, Container<T>& rhs)
        {
            Container<T> temp;
            temp.obj = lhs.obj + rhs.obj;
            return temp;
        }

        int main()
        {
            Container<int> a, b;
            a.setobj(5);
            b.setobj(6);

            Container<int> c = a + b;
            cout << c.getobj() << endl;

            return 0;
        }

    This is the error Comeau gives:

        Comeau C/C++ 4.3.10.1 (Oct  6 2008 11:28:09) for ONLINE_EVALUATION_BETA2
        Copyright 1988-2008 Comeau Computing.  All rights reserved.
        MODE:strict errors C++ C++0x_extensions

        "ComeauTest.c", line 27: error: an explicit template argument list is not allowed
                  on this declaration
          Container<T> operator+ <> (Container<T>& lhs, Container<T>& rhs)
                       ^

        1 error detected in the compilation of "ComeauTest.c".

    I'm having a hard time getting Comeau/MinGW to play ball, so that's where I turn to you guys. It's been a long time since my brain has melted this much under the weight of C++ syntax, so I feel a little embarrassed ;). Thanks in advance.

    EDIT: Eliminated an (irrelevant) lvalue error listed in the initial Comeau dump.
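    A sketch of the usual repair, keeping the author's code otherwise intact: forward-declare the operator template before the class, so the friend declaration's <> refers to a known template, and drop the explicit template argument list from the definition, which is exactly what the error on line 27 objects to:

        #include <iostream>

        // forward declarations so the friend clause can name the template
        template <typename T> class Container;
        template <typename T> Container<T> operator+(Container<T>& lhs, Container<T>& rhs);

        template <typename T>
        class Container
        {
            // befriend the specific specialization declared above
            friend Container<T> operator+ <> (Container<T>& lhs, Container<T>& rhs);
        public:
            void setobj(T ob) { obj = ob; }
            T getobj()        { return obj; }
        private:
            T obj;
        };

        // note: no '<>' on the definition
        template <typename T>
        Container<T> operator+(Container<T>& lhs, Container<T>& rhs)
        {
            Container<T> temp;
            temp.obj = lhs.obj + rhs.obj;
            return temp;
        }

        int main()
        {
            Container<int> a, b;
            a.setobj(5);
            b.setobj(6);
            Container<int> c = a + b;
            std::cout << c.getobj() << std::endl; // prints 11
            return 0;
        }

    The other common fix is to define the friend inline inside the class body, which grants friendship to one non-template function per instantiation; either approach should satisfy GCC and Comeau.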

    Read the article

  • Outlook 2007 Backup to D:\Outlook Fails - Access Denied, Write-Protected or File In Use

    - by nicorellius
    I can successfully save the Outlook PST file to the default location on the C: drive (C:\Documents and Settings\user\ ... \Outlook), but when I change the backup save-to directory to Outlook on the D: drive I get the error:

        Cannot copy Outlook: Access is denied. Make sure the disk is not full or write protected and that the file is not currently in use.

    I suppose it is not that crucial that I save this file here, but I have never seen this problem before and I have made this same change in the past. I did some searching in this knowledge exchange as well as elsewhere on changing permissions, etc., but this didn't help. I discovered that the folder on my D: drive (called Outlook) is neither write-protected nor read-only, as I can save to and modify files in that directory, as well as rename and delete the directory itself.

    At the time I installed this version of Outlook, I used a previously saved personal folder (a backup PST file), and I thought having this still open in Outlook was causing the trouble. But I closed it and still have the same problem.

    I know this is probably a silly error on my part, but I would like to figure it out. I'm new to superuser, but the answers I see are usually very good, so I thought I would post my first question. Thanks in advance.

    Read the article

  • Task scheduler "hidden" only hides task, not process

    - by Brandi
    I am trying to make an application that acts like a desktop application for all the computers in our network. I already have a Windows Forms app that works the way I want it to, and I'm using the Task Scheduler to start it on login. We would really like both the task and the process to be hidden from Task Manager, to avoid them being killed accidentally.

    Selecting "Hidden" in the Task Scheduler hides the task (good!), but the process is still visible (not good enough). I tried the option to run as SYSTEM or LOCAL SERVICE, so that the user would get "access denied" when trying to end it, or just wouldn't see it by default. However, running as a service makes the process invisible on Vista and 7, and the point of my app is to display information interactively (the user can click, sort, etc.).

    Is there any alternative that runs the process as someone or something besides the logged-in user, while still letting the logged-on user see and interact with it? (It would then be listed as someone else's app.) From what I've read on the internet, the only ways to actually hide something from Task Manager seem hacky and/or difficult and rather involved. I don't really want to write a bunch of C or whatever, only to maybe have it not work on Vista/7 anyway. Besides, for a legitimate app with a legitimate use, I shouldn't have to go to those extremes... I see "Access Denied" all the time for system processes; why is it so hard for me to do the same?

    So does anyone have any simple solutions? Is it easier than I think to list something in Task Manager as another user? Thanks in advance for any replies.

    Read the article

  • Windows Server Backup - Recover only shows the latest backup

    - by Steffen
    We're having quite some trouble at work using Windows Server Backup. We have a Hyper-V server (Windows 2008) running 8 virtual web servers on a variety of OSes: Windows 2003, Windows 2008 and a lone Debian. Each virtual server has a separate partition on the physical Hyper-V server, so e.g. E: is virtual server #1, F: is #2, and so forth.

    For backup we use Windows Server Backup, or more exactly the command-line tool wbadmin.exe. We need to make the backups without stopping the virtual servers, as we cannot afford the downtime (we've got users online both day and night), and Windows Server Backup offers to use the shadow copy provider to achieve this. We run wbadmin like this:

        wbadmin start backup -backuptarget:\\remotebackuplocation\somefolder -include:E: -quiet

    We run it once per partition, because we've got a script wrapped around that command for sending us an email about how it went. Each time wbadmin runs, it deletes the "Backup xxxx" folder it created during the last backup and creates a new one. To prevent this, we rename the "Backup xxxx" folder after each backup run, before starting the next one. I realize we must rename it back to its original name prior to recovering, and we obviously do this.

    Now the issue is as follows: even though we have all the backed-up files, and rename whichever backup we want to use back to its original name, we can only see the latest backup in the Windows Server Backup GUI when we select "Recover". This means we can only recover the last partition we backed up, so e.g. E: can never be recovered. In other words, we're screwed :-(

    My question is: does anyone know how to use Windows Server Backup for a scenario like this? Or is it simply not possible due to the simplicity of Windows Server Backup? If it's not possible, could you recommend some backup software for this purpose? We've already looked at MS' System Center Data Protection Manager; however, it's quite expensive and the boss doesn't like that :-/
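    Purely as a sketch of the rename dance described above, here is one per-partition batch fragment. The server name MYHYPERV and the ".driveE" suffix are hypothetical placeholders, wbadmin's folder naming may differ between versions, and it inherits the caveat already stated: the folder must get its original name back before the GUI will restore from it.

        @echo off
        rem Back up the E: partition (virtual server #1) to the remote share
        wbadmin start backup -backuptarget:\\remotebackuplocation\somefolder -include:E: -quiet

        rem Stash the freshly written "Backup ..." folder under a different name
        rem so the next per-partition run does not delete it
        for /d %%B in ("\\remotebackuplocation\somefolder\WindowsImageBackup\MYHYPERV\Backup *") do (
            move "%%B" "%%B.driveE"
        )

    This automates the workaround but does not solve the underlying limitation that the Recover wizard only enumerates the most recent catalog entry per target.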

    Read the article
