Search Results

Search found 6101 results on 245 pages for 'incremental backup'.


  • JPRT: A Build & Test System

    - by kto
    DRAFT: A while back I did a little blogging on a system called JPRT, the hardware used, and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add.

    Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;^)

    Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, as a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed. Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be for everyone. How many times have you heard: "So-and-so made a bunch of changes and now I can't build!"? But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic: the Queen of Hearts in 'Alice in Wonderland' said 'Off with their heads', and trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although consistency of the build machines hasn't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.

    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/test systems: some trigger on any change to the source base, some just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer. Well, maybe 'evil' isn't the right word; I haven't met many 'evil' developers, more like 'error prone' developers. ;^) Hmm, come to think of it, I may be one from time to time. :^{

    But the point is that by spreading the build over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' step followed by extensive testing (the testing not done by JPRT, or on the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system; tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;^) -kto


  • UK OUG Conference Highlights and Insights

    - by Richard Bingham
    As per my preemptive post, this was the first time the annual conference organized by the UK Oracle User Group (UKOUG) was split into two events, one for Oracle Applications and another in December for Oracle Technology. Apps13, as it was branded, was hailed as a success, with over 1000 registered attendees and three days of sessions, exhibitions, round-tables and many other types of content. As the poster on their stand illustrated, the UKOUG is a strong community with popular participants from both big and small Oracle partners and customers. The venue was also a more intimate setting than in previous years, allowing everyone to casually bump into those they hoped to. It gave a real feeling of an Apps Community.

    The main themes over the days were CRM and Customer Experience, HCM, and FIN/SCM. This allowed people to attend just one focused day if they wanted. In addition the Apps Transformation stream ran across all three days, offering insights, advice, and details on the newer product solutions like Fusion Applications. Here are some of the key take-aways I got from the conference, specific to my role in Fusion Applications Developer Relations:

    User Experience continues to be a significant reason for adopting some of the newer application products available, with immediately obvious gains in user productivity and satisfaction reported by customers. This doesn't stop with the baked-in UX either, with the Design Patterns proving popular and indeed currently being extended to include things like ADF Mobile and customizing the Simplified UI. More on this to come from us soon.

    The executive sessions emphasized the "it's a journey" phrase, illustrating that modern business applications are powered by technologies such as Cloud, Mobile, Social and Big Data, and that these can be harnessed to help propel your organization forward. Indeed the emphasis is moving away from the traditional vendor-prescribed linear applications road map, and towards plotting a course based on business priorities supported by a broad range of integrated solutions. To help with this, several conference sessions demoed the new "Applications Navigator" tool, developed in partnership with OUG members, which offers a visual framework to help organizations plan their Oracle Applications investments around business and technology imperatives. Initial reaction was positive, especially as customers do not need to decipher Oracle's huge product catalog, and the tool embeds the best blend of proven and integrated applications solutions. We'll share more on this when it is generally available.

    Several sessions focused on explanations and interpretation of Oracle OpenWorld 2013, helping highlight the key Oracle Applications messages and directions. With a relatively small percentage of conference attendees also at OpenWorld (judging from a show of hands), this was a popular way to distill the information available down into specific items of interest for the community. Please note the original OpenWorld 2013 content is still available for download but will not remain available forever (via the Oracle website OpenWorld Content Catalog > pick a session > see the PDF download).

    With the release of E-Business Suite 12.2, the move to develop and deploy on the Fusion Middleware stack becomes a reality for many Oracle Applications customers. This, coupled with recent E-Business Suite features such as the Integrated SOA Gateway and the E-Business Suite SDK for Java, illustrates how the gap between the technologies and techniques involved in extending E-Business Suite and Fusion Applications is quickly narrowing. We'll see this merging continue to evolve going forward.

    Getting started with Oracle Cloud Applications is actually easier than many customers expected, with a broad selection of both large and medium-sized organizations explaining how they added new features to their existing Oracle Applications portfolios. New functionality available from Fusion HCM and CX makes for popular extensions that do not have to disrupt core business services. Coexistence is the buzzword here, and the available integration is also simpler than many expected, commonly involving an initial setup data load, then regular incremental synchronizations, often without a need for constant real-time communication between systems. With much of this pre-built already, the implementation process is also quite rapid.

    With most people dressed in suits, we wanted to get the conversations going without the traditional English reserve, so we decided to make ourselves a bit more obvious, as the photo below showed. This seemed to be quite successful and helped those interested identify and approach us. Keep a look out for similar again. In fact, if you're in the UK there is an "Apps Transformation Day" planned by the UKOUG for the 19th March 2014, with more details to follow. Again something we'll be sure to participate in.

    I am hoping to attend the other half of the UKOUG annual conference, Tech13, which focuses more on Oracle technology and where there is likely to be a larger attendance of those interested in the lower-level aspects of applications customization and development. If you're going, let me know and maybe we can meet up.


  • Increase Max Pool Size ERROR when using SYBASE ASE ADO.NET data provider

    - by Brani
    I made a program in VB.NET (Visual Studio 2003) that connects to a Sybase ASE database using the ADO.NET data provider. Recently, after a hard disk failure, I restored the program's code from a (rather old) backup. But now the connection fails with a message that doesn't remind me of anything I have seen before. Here is the code:

        Dim cn As New AseConnection("Data Source='my_server';Port='5000';UID='sa';PWD='my_pwd';Database='my_db';")
        cn.Open()

    And the error message:

        Sybase.Data.AseClient.AseException - Cannot allocate more connections. Connection pool is at maximum. Increase Max Pool Size

    Can anybody help me?
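    A minimal sketch, not a definitive fix: this error usually means connections are opened faster than they are returned to the pool, so the first thing to check is that every connection is closed deterministically; raise the pool ceiling only if genuinely needed. Try/Finally stands in for a Using block because VS 2003 (.NET 1.1) predates Using in VB, and the Max Pool Size value below is illustrative ("Max Pool Size" is the keyword the error itself names):

        Imports Sybase.Data.AseClient

        ' Close the connection on every code path so it is returned to the pool.
        Dim cn As New AseConnection("Data Source='my_server';Port='5000';UID='sa';" & _
            "PWD='my_pwd';Database='my_db';Max Pool Size=50;")
        Try
            cn.Open()
            ' ... run commands here ...
        Finally
            cn.Close() ' returns the connection to the pool even if an exception is thrown
        End Try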


  • Server 2008 Task Scheduler Mapped Drive Access C#

    - by user219313
    I'm trying to get Server 2008's Task Scheduler to run a C# console app which backs up data to a mapped backup drive somewhere on Fasthosts' network. I've written a test app which simply does this:

        Directory.CreateDirectory(@"Z:\" + DateTime.Now.Ticks.ToString());

    i.e. it just creates a directory in the root of this Z: drive. This works fine when I run the .exe directly, but when I schedule it in Task Scheduler it says the task has completed with return code 3762507597 - I can't find any info on what this means. I'm running the task with the highest Admin privileges, as far as I can see.
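    For what it's worth, 3762507597 is 0xE0434F4D in hex, the generic exit code of an unhandled .NET exception, and mapped drive letters belong to the logon session that created them, so a scheduled task running in its own session typically cannot see Z:. A sketch of the usual workaround, assuming the share behind Z: is reachable by UNC path (the path below is hypothetical):

        using System;
        using System.IO;

        class Backup
        {
            static void Main()
            {
                // Mapped letters like Z: exist only in the logon session that
                // created them; scheduled tasks run in their own session, so
                // address the share directly by UNC path instead.
                const string backupRoot = @"\\fasthosts-server\backupshare"; // hypothetical share
                Directory.CreateDirectory(
                    Path.Combine(backupRoot, DateTime.Now.Ticks.ToString()));
            }
        }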


  • WCF Error - Security processor was unable to find a security header in the message

    - by quinntheeskimo
    Hi, I'm getting what now appears to be a security error in my WCF service. Originally my error was about a faulted state (I removed the using block around the client proxy to clear that error), but I have found more information by enabling tracing. I have been unable to get my solution running since encountering this error, and even my backup copy now gets the same error. I'm not sure what caused this to happen; I undid the changes I made (nothing relating to WCF) and still get the same error. The error from the trace is:

        System.ServiceModel.Security.MessageSecurityException: Security processor was unable to find a security header in the message. This might be because the message is an unsecured fault or because there is a binding mismatch between the communicating parties. This can occur if the service is configured for security and the client is not using security.

    I'm not really sure what I need to do to fix this; any help would be useful. The application was previously working.
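    A hedged sketch of one way to rule the mismatch out: construct the client with an explicit binding whose security mode matches the service's, rather than relying on defaults that may have drifted between configs (wsHttpBinding defaults to Message security, basicHttpBinding to None; the contract and address below are placeholders):

        using System.ServiceModel;

        // IMyService and the endpoint address stand in for your own contract/endpoint.
        // The security mode here must match the one configured on the service side.
        var binding = new WSHttpBinding(SecurityMode.Message);
        var address = new EndpointAddress("http://localhost:8000/MyService");
        var factory = new ChannelFactory<IMyService>(binding, address);
        IMyService proxy = factory.CreateChannel();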


  • Log4net RollingFileAppender Size rollingStyle file extension

    - by BrettRobi
    I am using the RollingFileAppender and the Size rollingStyle. By default it creates backup files with a numbered extension, which drives me nuts. Is it possible to change it so it always uses a defined extension (say .txt or .log) and inserts the number as part of the file name? For example:

        myapp.log
        myapp.1.log
        myapp.2.log
        myapp.3.log

    Here is my current configuration:

        <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
          <file value="myapp.log"/>
          <appendToFile value="true"/>
          <rollingStyle value="Size"/>
          <maximumFileSize value="1MB"/>
          <maxSizeRollBackups value="10"/>
          <staticLogFileName value="true"/>
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date{ISO8601} [%3thread] %-5level %logger{3}: %message%newline" />
          </layout>
        </appender>
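    If the log4net in use is recent enough (the setting was added around 1.2.11, if memory serves), the appender supports exactly this; a sketch of the extra element to add inside the appender above:

        <!-- Rolls to myapp.1.log, myapp.2.log, ... instead of myapp.log.1 -->
        <preserveLogFileNameExtension value="true" />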


  • WAMP Apache 403 Forbidden error

    - by jme
    I have recently changed from Windows XP to Windows 7 and have reinstalled WAMP server on my localhost testing PC. I have copied over my backup site too, and it is working correctly. I have installed the tinyMCE JavaScript editor as well as ajaxfilemanager. Everything works except that when accessing one file I get the following error:

        Forbidden
        You don't have permission to access /catalog/admin/includes/javascript/tinymce/plugins/ajaxfilemanager/ajaxfilemanager.php on this server.

    I can access this file/folder through Windows Explorer, and all other parts of my site work (PHP, JS in other sections, etc). Thanks for any help!
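    In WAMP a 403 like this usually comes from Apache's own access rules rather than Windows permissions; a minimal sketch of an httpd.conf stanza for the denied directory, assuming Apache 2.2 syntax (the path is an example, adjust it to your DocumentRoot):

        # Apache 2.2 syntax; point the path at the directory being denied
        <Directory "C:/wamp/www/catalog">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order Allow,Deny
            Allow from all
        </Directory>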


  • Subtotal error on calculated field in a Reporting Services Matrix

    - by peacedog
    I've got a Reporting Services report that has two row groups: Category and SubCategory. For columns it has LastYearDataA, ThisYearDataA, LastYearDataB, ThisYearDataB. I added two columns (one for A and one for B) to handle an expression calculation (showing the percentage difference from LastYear to ThisYear for each). That's working. The problem comes in the SubTotal for each category. The raw numbers are totaling correctly: if SubCat1 has 10/5 for LastYear/ThisYear A, and SubCat2 has 5/1, then I get 15/6 for the totals. But the percentage reported in the total column is "50%", matching SubCat1. Percentages for each SubCategory are being calculated correctly (according to my backup math, anyway), but the subtotal % always matches the first SubCategory in the group. Is this impossible to do in Reporting Services 2005?
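    A hedged suggestion: in a 2005 matrix, a calculated cell stays correct at every grouping scope only if the expression itself aggregates, because Sum() evaluates over whatever scope the cell sits in (detail row or subtotal row). With hypothetical field names, the percentage column's expression would be:

        =Sum(Fields!ThisYearDataA.Value) / Sum(Fields!LastYearDataA.Value)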


  • Removing offline/defunct files in SQL server 2008

    - by philox
    How do I remove traces of files marked as OFFLINE or DEFUNCT in Microsoft SQL Server 2008? I have been playing around with a setup where I create a database with 3 filegroups: Primary, FileGroupData and FileGroupIndex. The clustered index uses FileGroupData and a non-clustered index is set to use FileGroupIndex. To simulate a disk failure I shut down SQL Server and manually deleted the files in the index filegroup. To start the database I marked the files 'OFFLINE', but now I can't delete the index files, which are offline. I don't have a backup of the files, as they are merely indexes, but that means I can't restore them and bring their status back to "ONLINE". How would you recommend removing the files and the filegroup? They still show up in Management Studio under files/filegroups, and Management Studio is not able to delete them. As far as I can tell this is different from the question posted at http://stackoverflow.com/questions/462637/how-do-i-remove-offline-files-from-a-sql-server-2005-database /Philip
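    A hedged sketch of the cleanup commands usually suggested, using the names from the question; note that REMOVE FILE may still refuse while a file is OFFLINE rather than DEFUNCT (files generally become defunct via a piecemeal restore that omits their filegroup), so treat this as a starting point, not a guaranteed fix:

        -- Inspect what SQL Server still thinks it has
        SELECT name, state_desc, physical_name FROM sys.database_files;

        -- Once nothing references FileGroupIndex (drop or rebuild the
        -- non-clustered index onto another filegroup first), try removing
        -- the pieces; the first statement takes the logical file name.
        ALTER DATABASE MyDb REMOVE FILE MyIndexFile;
        ALTER DATABASE MyDb REMOVE FILEGROUP FileGroupIndex;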


  • BizTalk server problem

    - by WtFudgE
    Hi, we have a BizTalk server (a virtual one (1!)...) at our company, and an SQL server where the data is kept. We have a lot of data traffic - I'm talking hundreds of thousands of messages - so I'm actually not even sure one server is safe enough, but our company is not that easy to convince. Recently we have had a lot of problems. Allow me to describe the situation in detail, so I'm not missing anything.

    Our server has 5 applications:

    One with 3 orchestrations, 12 send ports, 16 receive locations.
    One with 4 orchestrations, 32 send ports, 20 receive locations.
    One with 4 orchestrations, 24 send ports, 20 receive locations.
    One with 47 (yes 47) orchestrations, 37 send ports, 6 receive locations.
    One common application with a couple of resources.

    Our problems have occurred since we deployed the application with the 47 orchestrations. A lot of these orchestrations use assign shapes which use C# code to do the mapping. This is because we use HL7 extensions, and this is kind of special: by using C# code and XPath it was a lot easier to do the mapping, because a lot of these schemas look alike. The C# reads in XmlNodes received through XPath and returns XmlNodes which are then assigned back to BizTalk messages. I'm not sure if this could be the cause, but I thought I'd mention it.

    The send and receive ports have a lot of different types: File, MQSeries, SQL, MLLP, FTP. Each of these types has a different host instance, to balance out the load. Our orchestrations use the BizTalkApplication host. On this server a couple of scripts are also running, mostly FTP upload scripts, and also a zipper script which zips files every half hour into a daily zip and deletes the zip files after a month. We use this zip script on our backup files (we back up a lot; backups are also on our server). We did this because the server had problems sending files to a location where there were a lot (A LOT) of files, and after the files were reduced to zips it went better.

    The problems we are having recently are mainly two major ones.

    Our most important problem is the following. We kept a receive location with a lot of messages on a queue for testing. After we start this receive location, which uses the 47 orchestrations, the number of running service instances starts to skyrocket. OK, this is pretty normal. Let's say about 10000; then we stop the receive location to see how BizTalk handles these 10000 instances. Normally they would go down pretty fast, and sometimes they do, but after a while it starts to "throttle", meaning they just stop being processed and the number of service instances stays the same. For example, in 30 seconds it goes down from 10000 to 4000, and then it stays at 4000 and lowers very very very slowly, like 30 in 5 minutes or so. This means that all the service instances of the other applications are also stuck in here, and they are not processed either. We noticed that after restarting our host instances the instance number went down fast again, so we tried selectively restarting different host instances to locate the problem. We noticed that eventually restarting the file send/receive host instance would do the trick, so we thought file sends were the problem. Considering that we make a lot of backups, we replaced the file type backups with MQSeries backups. The same problem occurred, and funnily enough, restarting the file send/receive host still fixes it. No errors can be found in the event viewer either.

    The second problem we're having is that sometimes, at around 6 AM, all or some of the host instances are stopped. In the event viewer we noticed the following errors (there is more than one):

        The receive location "MdnBericht SQL" with URL "SQL://ZNACDBPEG/mdnd0001/" is shutting down. Details: "The error threshold has been exceeded. The receive location is shutting down.".

        The Messaging Engine failed to add a receive location "M2m Othello Export Start Bestand" with URL "\\m2mservices\Othello_import$\DataFilter Start\*.xml" to the adapter "FILE". Reason: "The FILE adapter cannot access the folder \\m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password.".

        The FILE adapter cannot access the folder \\m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password.

        An attempt to connect to "BizTalkMsgBoxDb" SQL Server database on server "ZNACDBBTS" failed. Error: "Login failed for user ''. The user is not associated with a trusted SQL Server connection."

    It would seem that there's a login failure at this time, and that because of it other services also experience problems and are eventually shut down. The thing is, our user is an admin, and it's impossible that its password is wrong "sometimes". We have considered that the problem could be due to an infrastructure issue, but that's not really our department.

    I know it's a long post, but we're not sure anymore what to do. Would adding another server and balancing the load solve our problems? Is there a way to measure our load and know where to start splitting? What are normal load numbers, etc.? I appreciate any answers, because these issues are getting worse and we're on a deadline. Thanks a lot for replies!


  • Eclipse Could not Delete error

    - by KáGé
    Hello, I'm working on a project with Eclipse, and until now everything was fine, but the last time I tried building it, it returned the error:

        The project was not built due to "Could not delete '/Torpedo/bin/bin'.". Fix the problem, then try refreshing this project and building it since it may be inconsistent  Torpedo  Unknown  Java Problem

    and it deleted my bin folder, which stores all the images and other stuff needed for the program. (Fortunately I had a backup.) I've tried googling it and tried every solution I found, but nothing helped; also, most of them suggest deleting the folder by hand, which I can't do. What should I do? Thanks.


  • Finding the Geo-location on a Blackberry?

    - by Frederico
    I'm running into an issue when trying to geolocate users who are using BlackBerry devices. Currently there are a few checks I go through to geolocate the individual. First, using the navigator parameter inside browsers:

        if (navigator.geolocation)

    If this fails, I have a backup using a free service (for testing) from MaxMind (see here), yet this doesn't return the city at all either. I've then tried using the JS API that Google provides, echoing out google.loader.ClientLocation:

        if (google.loader.ClientLocation != null) {
            document.write("Your Location Is: " + google.loader.ClientLocation.address.city + ", "
                + google.loader.ClientLocation.address.region
                + " lat: " + google.loader.ClientLocation.latitude
                + " Long: " + google.loader.ClientLocation.longitude);
        } else {
            document.write("Your Location Was Not Detected By Google Loader");
        }

    When this didn't work I tried following the Google Maps 3.0 detect-location approach (see "Detecting Location"). I've done all this after seeing it work correctly in Google Latitude, so I know there has to be a way to get the location... any thoughts or ideas on what I could possibly try? Thank you kindly.
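    Note that `if (navigator.geolocation)` only tests that the API exists; a position still has to be requested asynchronously. A sketch of the callback form, keeping the IP-based services as the error fallback (the timeout value is illustrative):

        // navigator.geolocation only advertises support; an actual position
        // must be requested, and GPS devices report it via the success callback.
        if (navigator.geolocation) {
            navigator.geolocation.getCurrentPosition(
                function (pos) {
                    document.write("lat: " + pos.coords.latitude +
                                   " long: " + pos.coords.longitude);
                },
                function (err) {
                    // fall back to MaxMind / google.loader.ClientLocation here
                    document.write("Geolocation failed: " + err.message);
                },
                { timeout: 10000 } // illustrative; GPS fixes can be slow
            );
        }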


  • SQL Server Full-Text Search: Hung processes with MSSEARCH wait type

    - by CheeseInPosition
    We have a SQL Server 2005 SP2 machine running a large number of databases, all of which contain full-text catalogs. Whenever we try to drop one of these databases or rebuild a full-text index, the drop or rebuild process hangs indefinitely with a MSSEARCH wait type. The process can't be killed, and a server reboot is required to get things running again. Based on a Microsoft forums post [1], it appears that the problem might be an improperly removed full-text catalog. Can anyone recommend a way to determine which catalog is causing the problem, without having to remove all of them?

    [1] http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2681739&SiteID=1 - "Yes we did have full text catalogues in the database, but since I had disabled full text search for the database, and disabled msftesql, I didn't suspect them. I did however get an article from Microsoft support, showing me how I could test for catalogues not properly removed. So I discovered that there still existed an old catalogue, which I, after and only after re-enabling full text search, was able to delete; since then my backup has worked."
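    I can't point at the original support article, but the catalog metadata is queryable, which may be enough to spot an orphaned entry without removing everything; a hedged sketch for SQL Server 2005 (run per database):

        -- Catalogs SQL Server still has metadata for; comparing [path]
        -- against what actually exists on disk can reveal an improperly
        -- removed catalog.
        SELECT fulltext_catalog_id, name, path
        FROM sys.fulltext_catalogs;

        -- The older metadata procedure, in case a database runs in 80
        -- compatibility mode.
        EXEC sp_help_fulltext_catalogs;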


  • How to find a working directory which works between different computers - C

    - by Jamie Keeling
    Hello, I am running two processes. Process A is opened by Process B using the following example:

        createProcessHandle = CreateProcess(
            TEXT("C:\\Users\\Jamie\\Documents\\Application\\Debug\\ProcessA.exe"),
            TEXT(""),
            NULL, NULL, FALSE, 0, NULL, NULL,
            &startupinfo,
            &process_information);

    As you can see, the process is reliant on the path given to it. The problem I have is that if I change the location of my ProcessA.exe (such as for a backup/duplicate), it's a tiresome process to keep recoding the path. I want it to run no matter where it is, without having to recode the path manually. Can anybody suggest a solution to this?
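    One common approach is to stop hard-coding the path and instead derive it from the running executable's own location at run time; a sketch using GetModuleFileName, assuming ProcessA.exe sits in the same folder as Process B:

        #include <windows.h>
        #include <tchar.h>

        /* Build an absolute path to ProcessA.exe relative to the running
           executable, so the pair keeps working wherever the folder is copied. */
        BOOL LaunchSibling(STARTUPINFO *si, PROCESS_INFORMATION *pi)
        {
            TCHAR path[MAX_PATH];
            DWORD len = GetModuleFileName(NULL, path, MAX_PATH); /* full path of this process */
            if (len == 0 || len == MAX_PATH)
                return FALSE;

            /* strip the file name, keep the directory */
            TCHAR *slash = _tcsrchr(path, TEXT('\\'));
            if (slash == NULL)
                return FALSE;
            *(slash + 1) = TEXT('\0');
            _tcscat(path, TEXT("ProcessA.exe")); /* assumed to live beside Process B */

            return CreateProcess(path, NULL, NULL, NULL, FALSE, 0,
                                 NULL, NULL, si, pi);
        }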


  • Database entries existence depends on time / boolean value of a field changed automatically

    - by lisak
    Hey, I have this situation here: an auction system listing orders that are "active" (their deadline hasn't passed yet). There are a lot of orders, so it is better to have an "active" field instead of listing them with time-based queries. I'm not a database expert, just a user. What is the best way to implement this scenario? Do I have to manually check the "deadline" field and change the "active" status every once in a while? Is MySQL able to change the field automatically? How demanding are queries of the type "select orders where the deadline has passed"? Do I need to use TIMESTAMP (a long holding the number of milliseconds since the UTC epoch) or DATETIME for the queries to be more efficient? Finally, I have to move old order entries to a different backup table.
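    If the server is MySQL 5.1 or later, the event scheduler can flip the flag automatically, and an index on the deadline column keeps the time-based queries cheap; a sketch with assumed table and column names:

        -- Assumed schema: orders(id, deadline DATETIME, active TINYINT(1))
        CREATE INDEX idx_orders_deadline ON orders (deadline);

        -- Requires MySQL 5.1+ with the event scheduler turned on
        SET GLOBAL event_scheduler = ON;

        CREATE EVENT expire_orders
            ON SCHEDULE EVERY 1 MINUTE
            DO UPDATE orders SET active = 0
               WHERE active = 1 AND deadline < NOW();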


  • Restore VisualSVN server from client copy.

    - by Kevin
    I am running VisualSVN Server in a Windows VM. The VM crashed and corrupted the image. After restoring an older image (2007) we discovered that our data backup is not functioning properly. Hence I have a bunch of projects (~20) sitting on my laptop (client side) and I want to push them back into the VisualSVN Server, which is now empty. I know this can be done by simply adding the project files manually, but that is going to take a long time because I don't want to include every file (i.e. compiled files). Any suggestions would be greatly appreciated.
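    One way to avoid hand-picking files: svn import skips anything matching the client's global-ignores, so extending that pattern list before importing filters out the build output automatically. A sketch (patterns and URL are illustrative):

        # In the Subversion client config file, [miscellany] section, extend:
        #   global-ignores = bin obj Debug Release *.dll *.pdb *.exe
        # then import each project folder into the empty repository:
        svn import C:\Projects\MyProject https://svnserver/svn/MyProject/trunk -m "Restore from working copy"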


  • All GIT Repos Corrupted on System Restore

    - by yar
    I restored my OS X system today by copying it over from a backup. Most things seem to be working, but every single git repo gives pretty much the same error:

        fatal: object 03b45161eb27228914e690e032ca8009358e9588 is corrupted

    I have tried chowning and doing everything as sudo or root... I have no idea what to try next. This would be a normal git question except that it affects many repos. Ideas? Note: I'm using git 1.7.0.3, and I was probably using 1.7.0 before.
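    Since the corruption came from the file copy rather than from git itself, a fresh clone of each remote (or a teammate's intact repo) is usually the cleanest source of good objects; a hedged sketch:

        # see how bad each repository actually is
        git fsck --full

        # if the repo has a remote, the simplest reliable fix is to reclone
        # and carry over any uncommitted work from the damaged working tree
        git clone <remote-url> fresh-copy

        # without a remote, objects can sometimes be grafted in from another
        # intact clone of the same project
        git remote add rescue /path/to/intact/clone
        git fetch rescue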


  • Can I use imp/exp tools to migrate database from Oracle 9 to Oracle 10

    - by Karol Kolenda
    I'm a subcontractor, and my client wants to upgrade their Oracle database from 9 to 10. Another vendor is going to perform the upgrade, and I was asked to create whatever backup I need beforehand and then recreate the environment on Oracle 10. All my data is stored in a separate database in a single schema - no fancy relations, scripts or anything like that (the actual app supports different DBs: Oracle, SQL Server, Postgres, so we avoid any DB-specific code). I was hoping to use imp/exp, but I'm not sure if they are backward compatible (exp from 9 and imp to 10)? If there is a better/recommended way of dealing with a similar situation, I'll be grateful for any advice.
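    The general rule for exp/imp is upward compatibility: export with the source version's exp, then import with the target version's imp, so an export taken on 9i should import fine into 10g. A sketch of the pair, with placeholder credentials and schema name:

        # On the Oracle 9 side, export the schema
        exp myuser/mypwd@orcl9 owner=MYSCHEMA file=myschema.dmp log=exp.log

        # On the Oracle 10 side, import it
        imp system/mypwd@orcl10 fromuser=MYSCHEMA touser=MYSCHEMA file=myschema.dmp log=imp.log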


  • Undo "Upgrade Current Target for iPad?

    - by Moshe
    I've used "Upgrade Current Target for iPad" and I didn't like the result. Now I've tried to downgrade by deleting files, but it's not working. Help! Do I need to change project settings? Does Xcode keep a backup of the project? What to do... It doesn't run on iPhone anymore...

    EDIT: The console crash log from the iPhone Simulator:

        2010-05-10 00:11:02.455 iDecide[9743:207] Unknown class iDecideAppDelegate in Interface Builder file.
        2010-05-10 00:11:02.456 iDecide[9743:207] Unknown class iDecideViewController in Interface Builder file.
        2010-05-10 00:11:02.465 iDecide[9743:207] *** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<UICustomObject 0x391eb80> setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key viewController.'
        2010-05-10 00:11:02.466 iDecide[9743:207] Stack: ( 34047067, 2420679945, 34206145, 215656, 214197, 4551796, 33949999, 4546347, 4554615, 2715730, 2754518, 2743092, 2725503, 2752609, 39038297, 33831808, 33827912, 2719253, 2756527 )


  • Git diff gone mad?

    - by dr Hannibal Lecter
    I'm trying to figure out what's going on with my local git repo:

        I edit a file.
        Git reports everything has changed in the file (I only changed one line).
        At first I think "must be a newline problem", but it's not.
        I do a diff in TortoiseGit; everything looks fine.
        I do a diff with NetBeans (git plugin); everything seems fine.
        I do a reset, back up the file, modify it; git again reports everything has changed.
        I do a binary compare in Total Commander; the files have no differences except the single line I changed.
        I do a hard reset again. Git tells me it was done successfully.
        git status still says my file has changed.
        I diff the thing and there are no differences - but git says there are.

    I've tried using both git bash and the GUI, with the same results (I'm on Windows). Any clues what's going on here?
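    Byte-identical content combined with a whole-file diff usually points at line-ending normalization or file-mode bits rather than content; a few hedged checks:

        git config core.autocrlf    # a changed autocrlf setting re-normalizes every line
        git config core.filemode    # mode-only changes also show as full modifications
        git diff --stat             # does git really count every line as changed?
        git diff -w                 # diff again, ignoring whitespace/EOL differences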


  • Intellij Grails and Git

    - by WaZ
    I want to back up my code using SmartGit. As a start I am a bit confused: IntelliJ has created two folders for my Grails project. These reside in:

        1) C:\Documents and Settings\me\.grails\1.2.1\projects
        2) C:\Documents and Settings\me\IdeaProjects\

    1) contains a plugins folder, which holds the directories and files of the plugins I am using inside my project. Do I have to include both 1) and 2) in Git? If yes, what can I ignore? If no, which of the files do I have to include? Thanks, much appreciated, WB
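    Conventional practice is to version only the project sources under IdeaProjects; the .grails folder is per-user working state that Grails regenerates, and plugins are re-resolved from the project's application.properties. A sketch of a .gitignore for the project root (entries illustrative):

        # IDE and Grails working state - regenerated, so keep out of Git
        .idea/
        *.iml
        *.ipr
        *.iws
        target/
        stacktrace.log
        # plugins live under ~/.grails and are reinstalled from application.properties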


  • Error 2006: "MySQL server has gone away" using Python, Bottle Microframework and Apache

    - by Jamie
    After accessing my web app using Python 2.7, the Bottle micro framework v0.10.6, Apache 2.2.22 and mod_wsgi on Ubuntu Server 12.04 64-bit, I'm receiving this error after several hours:

        OperationalError: (2006, 'MySQL server has gone away')

    I'm using the MySQLdb module. It usually happens when I haven't accessed the server for a while. I've tried closing all the connections, which I do, using this:

        cursor.close()
        db.close()

    where db is the standard MySQLdb.Connection() call. The my.cnf file looks something like this:

        key_buffer = 16M
        max_allowed_packet = 128M
        thread_stack = 192K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10

    It is the default configuration file, except max_allowed_packet is 128M instead of 16M. The queries to the database are quite simple; at most they retrieve approximately 100 records. Can anyone help me fix this? One idea I did have was to use try/except, but I'm not sure if that would actually work. Thanks in advance, Jamie
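    The usual culprit is MySQL's wait_timeout (8 hours by default) closing idle connections; either raise it in my.cnf or revalidate the connection before use. A sketch of the latter, assuming MySQLdb (credentials are placeholders):

        import MySQLdb

        _conn = None

        def get_connection():
            """Return a live connection, reviving it if wait_timeout killed it."""
            global _conn
            if _conn is None:
                _conn = MySQLdb.connect(host="localhost", user="app",
                                        passwd="secret", db="appdb")  # placeholders
            try:
                # ping(True) asks the client library to reconnect transparently
                # if the server has dropped the connection
                _conn.ping(True)
            except MySQLdb.OperationalError:
                _conn = MySQLdb.connect(host="localhost", user="app",
                                        passwd="secret", db="appdb")
            return _conn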


  • NoSQL reliable for a small business app?

    - by mamcx
    I'm deciding between a NoSQL engine and a regular SQL one for a document management system for a small business. I have experience with Firebird/SQL Server and have found a good track record of reliability (especially with Firebird). This market is full of crappy "servers" (clone-built PCs, mostly), cheap hard disks, rarely any RAID or anything like that; some are in locations where a power-off is normal, some don't have a UPS, etc. (I will include off-site auto-backup to external servers, but that doesn't change the internal setup.) (I know about educating end users on proper setups, but it's foolish to depend on that, so let's stick to the point.) From the design point of view, a schema-less database is the way to go for my system, but I worry whether any of the current solutions (MongoDB, Tokyo Cabinet, etc.) are like Firebird and survive crashes, malfunctions and abuse, so that data corruption is very rare. The plan is to store the office documents there and provide a central repository.


  • Postgresql - one database for everyone, or one-database per customer

    - by user337876
    I'm working on a web-based business application where each customer will need to have their own data (think of the basecamphq.com model). For scalability and ease of upgrades, I'd prefer to have a single database where each customer gets a filtered view of the data. The problem is how to guarantee that they stay sandboxed to their own data. Trying to enforce this in code seems like a disaster waiting to happen. I know Oracle has a way to append a WHERE clause to every query based on a login id, but does PostgreSQL have anything similar? If not, is there a different design pattern I could use (like creating a filtering view of each table for each customer)? Worst-case scenario, what is the performance/memory overhead of having 1000 100MB databases vs. a single 1TB database? I will need to provide backup/restore functionality on a per-customer basis, which is dead simple when each customer has their own database but quite a bit trickier if they share one with other customers.
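    PostgreSQL of this vintage has no direct equivalent of Oracle's VPD, but the usual pattern is a session variable plus per-table views; a sketch with illustrative names (older servers need custom_variable_classes = 'app' declared in postgresql.conf before custom settings work):

        -- One shared table, tagged by customer
        CREATE TABLE projects (
            id          serial PRIMARY KEY,
            customer_id integer NOT NULL,
            name        text
        );

        -- The application sets its customer once per connection/session...
        SELECT set_config('app.customer_id', '42', false);

        -- ...and only ever queries through views that apply the filter
        CREATE VIEW customer_projects AS
            SELECT * FROM projects
            WHERE customer_id = current_setting('app.customer_id')::integer;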


  • Resetting or refreshing a database connection

    - by cdonner
    This Android application on Google uses the following method to refresh the database after replacing the database file with a backup:

        public void resetDbConnection() {
            this.cleanup();
            this.db = SQLiteDatabase.openDatabase(
                "/data/data/com.totsp.bookworm/databases/bookworm.db",
                null,
                SQLiteDatabase.OPEN_READWRITE);
        }

    I did not build this app, and I am not sure what happens. I am trying to make this idea work in my own application, but the data appears to be cached by the views, and the app continues to show data from the database that was replaced, even after I call cleanup() and reopen the database. I have to terminate and restart the activity in order to see the new data. I tried calling invalidate on my TabHost view, which pretty much contains everything. I thought the views would redraw and refresh their underlying data, but this did not have the expected result either. I ended up restarting the activity programmatically, which works, but it seems like a drastic measure. Is there a better way?
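    Reopening the database does not refresh Cursors that were handed out earlier, so list views keep showing the old snapshot; after resetDbConnection() the query has to be rerun and the fresh cursor given to the adapter. A sketch assuming a CursorAdapter-backed list (dataHelper, getDb() and the query are hypothetical):

        // After swapping the database file and reopening the connection,
        // re-run the query and swap the fresh Cursor into the adapter.
        dataHelper.resetDbConnection();
        Cursor fresh = dataHelper.getDb().query("book", null, null, null,
                                                null, null, null); // illustrative query
        adapter.changeCursor(fresh); // closes the stale cursor and redraws the list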

