Search Results

Search found 9847 results on 394 pages for 'cloud backup'.


  • How to rename Java packages without breaking Subversion history?

    - by M. Joanis
    Hello everyone,

    The company I'm working for is starting up, and they changed their name in the process. We still use the package name com.oldname because we are afraid of breaking the file change history, or the ancestry links between versions, or whatever else we could break (I don't think I'm using the right terms, but you get the concept). We use Eclipse, TortoiseSVN and Subversion.

    I found somewhere that I should do it in several steps to prevent incoherence between the contents of the .svn folders and the package names in the Java files:

    1. Use TortoiseSVN to rename the directory, updating the .svn directories.
    2. Manually rename the directory back to the original name.
    3. Use Eclipse to rename the packages (refactor) to the new name, updating the Java files.

    That seems good to me, but I need to know whether the ancestry, the history and everything else will still be coherent and working well. I don't have the keys to that server, which is why I can't just back everything up and try a thing or two. I would like to come up with either a good reason not to do it, or a way of doing it that works.

    Thank you for your help,
    M. Joanis
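
    For illustration, a minimal sketch of the same idea done with plain command-line Subversion rather than TortoiseSVN (the paths below are hypothetical, and this assumes an otherwise clean working copy): svn move records the rename as a copy-with-history, so the old revisions remain reachable from the new path.

      cd src/com
      svn move oldname newname
      svn commit -m "Rename package com.oldname to com.newname"
      # let Eclipse refactor the package declarations and imports afterwards,
      # then check that the pre-rename revisions still show up:
      svn log newname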

    Read the article

  • Full text index requires dropping and recreating - why?

    - by Amjid Qureshi
    Hi all,

    So I've got a web app running on .NET 3.5 connected to a SQL 2005 box. We do scheduled releases every 2 weeks. About 14 tables out of 250 are full-text indexed. After not every release, but a few too many, the indexes crap out. They seem to have data in there, but when we try to search them from the front end or SQL enterprise we get timeouts/hangs.

    We have a script that disables the indexes, drops them, deletes the catalog and then recreates the indexes. This fixes the problem 99 times out of 100, and the one other time, we run the script again and it all works. We have tried just rebuilding the full-text index, but that doesn't fix the issue.

    My question is: why do we have to do this? What can we do to sort the index out? Here is a bit of the script:

      IF EXISTS (SELECT * FROM sys.fulltext_indexes fti WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]'))
          ALTER FULLTEXT INDEX ON [dbo].[Address] DISABLE
      GO
      IF EXISTS (SELECT * FROM sys.fulltext_indexes fti WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]'))
          DROP FULLTEXT INDEX ON [dbo].[Address]
      GO
      IF EXISTS (SELECT * FROM sysfulltextcatalogs ftc WHERE ftc.name = N'DbName.FullTextCatalog')
          DROP FULLTEXT CATALOG [DbName.FullTextCatalog]
      GO
      -- may need this line if we get an error BACKUP LOG SMS2 WITH TRUNCATE_ONLY
      CREATE FULLTEXT CATALOG [DbName.FullTextCatalog]
      ON FILEGROUP [FullTextCatalogs]
      IN PATH N'F:\Data'
      AS DEFAULT
      AUTHORIZATION [dbo]

      CREATE FULLTEXT INDEX ON [Address](CommonPlace LANGUAGE 'ENGLISH')
      KEY INDEX PK_Address ON [DbName.FullTextCatalog]
      WITH CHANGE_TRACKING AUTO
      go

    Read the article

  • Version control for PHP Development

    - by Vinod Mohan
    Hello,

    I have been hearing a lot about the advantages of using a version control system and would like to try one. I have been doing freelance web development in PHP for the past 2 years; two months back I hired two more programmers to help me, and I will be hiring one more person soon. We maintain 4 websites, all of which are my own, which are continuously being edited by one of us. I learned PHP by myself and have never worked at any other firm, so I am new to version control, unit testing and all that.

    Currently, we have development servers on our workstations. When we edit a particular section of a site, we download the code for that particular section (say /news/ or /movies/ or /wallpapers/ ) from the production server to the dev server, make the changes locally and upload to the production server (no code review / testing). Because of this, our dev server is always out of date with our prod server. Occasionally, this also creates problems when one of us forgets to download the latest copy from prod and overwrites the last change. I know this is very, very foolish, but currently our prod server is the only copy that has all the updates and latest changes.

    Can anyone please suggest which is the best version control system for me? I am more interested in distributed version control, since we don't have a central backup for all our code. I read about Mercurial and Git and found that Mercurial is used in several large open source projects by Mozilla, Sun, Symbian etc. So which one do you think is better for me? Beyond version control, if there is any other package that I can use to make my current setup better, please mention that too :)

    Thanks and Regards,
    Vinod Mohan
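
    Purely as an illustration of getting started with a distributed system (the host name and path below are made up): since the production server currently holds the only authoritative copy, one approach is to turn that copy into the first Git repository and clone it to each workstation.

      # on the production server, inside the site's root directory
      git init
      git add .
      git commit -m "Initial import of the live site"

      # on each developer's workstation
      git clone ssh://user@prod.example.com/var/www/site1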

    Read the article

  • Why does Fabric display the disconnect from server message for almost 2 minutes?

    - by Matthew Rankin
    Fabric displays "Disconnecting from username@server... done." for almost 2 minutes before showing a new command prompt whenever I issue a fab command. This problem exists when issuing Fabric commands to both an internal server and a Rackspace cloud server. Below I've included the auth.log from the server; I didn't see anything in the logs on my MacBook. Any thoughts as to what the problem is?

    Server's SSH auth.log with LogLevel VERBOSE:

      Apr 21 13:30:52 qsandbox01 sshd[19503]: Accepted password for mrankin from 10.10.100.106 port 52854 ssh2
      Apr 21 13:30:52 qsandbox01 sshd[19503]: pam_unix(sshd:session): session opened for user mrankin by (uid=0)
      Apr 21 13:30:52 qsandbox01 sudo: mrankin : TTY=unknown ; PWD=/home/mrankin ; USER=root ; COMMAND=/bin/bash -l -c apache2ctl graceful
      Apr 21 13:30:53 qsandbox01 sshd[19503]: pam_unix(sshd:session): session closed for user mrankin

    Server configuration:

    - OS: Ubuntu 9.10
    - OpenSSH: Ubuntu package version 1:5.1p1-6ubuntu2

    Client configuration:

    - OS: Mac OS X 10.6.3
    - Fabric 0.9
    - Virtualenv 1.4.7
    - pip 0.7

    Thoughts on the cause of the issue: I don't know how long the problem has existed; however, I know that at one point I didn't have this problem. What has changed since then is that I have recreated my virtualenvs using virtualenv 1.4.7, virtualenvwrapper 2.1, and pip 0.7. Not sure if this is related, but it is a thought, since I run my fabfiles from within a virtualenv.

    Read the article

  • Testing a wide variety of computers with a small company

    - by Tom the Junglist
    Hello everyone,

    I work for a small dotcom which will soon be launching a reasonably complicated Windows program. We have uncovered a number of "WTF?" type scenarios that we've been unable to replicate, which have turned up as the program has been passed around to the various not-technical-types.

    One of the biggest problems we're facing is that of testing: there are a total of three programmers -- only one working on this particular project, me -- no testers, and a handful of assorted other staff (sales, etc). We are also geographically isolated. The "testing lab" consists of a handful of VMWare and VPC images running sort-of fresh installs of Windows XP and Vista, which runs on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to most effectively report problems, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us their reporting is only so useful, and arranging remote control sessions to dig into the guts of their computers is time-consuming.

    I am looking for resources that allow us to amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look into them; however, they are still largely self-service, and I was wondering if there were any better ways. How have you, or any other companies in a similar situation, handled this sort of thing?

    EDIT: According to the lingo, our goal here is to expand our systems testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much, as we are consistently hitting walls at the systems testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2?

    Tom
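
    As a purely illustrative sketch of the EC2 idea (the AMI ID, credentials and the use of the boto library are assumptions, nothing from the post): a throwaway Windows test instance can be launched, used for a systems-test pass, and terminated so that it only costs for the duration of the run.

      # hypothetical sketch using boto; placeholders throughout
      from boto.ec2.connection import EC2Connection

      conn = EC2Connection('ACCESS_KEY', 'SECRET_KEY')    # placeholder credentials
      reservation = conn.run_instances(
          'ami-12345678',                                 # placeholder Windows AMI
          instance_type='m1.small',
          key_name='test-lab-keypair')
      instance = reservation.instances[0]

      # ... install the build on the instance, run the system tests, collect logs ...

      conn.terminate_instances([instance.id])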

    Read the article

  • Command output not written to buffer with Expect

    - by Romuald
    Hello,

    I am trying to back up a Linkproof device with an Expect script and I'm having some trouble. It's my first script in Expect and I have reached my limits ;)

      #!/usr/bin/expect
      spawn ssh @IPADDRESS
      expect "username:"
      # Send the username, and then wait for a password prompt.
      send "@username\r"
      expect "password:"
      # Send the password, and then wait for a shell prompt.
      send "@password\r"
      expect "#"
      # Send the prebuilt command, and then wait for another shell prompt.
      send "system config immediate\r"
      # Send space to pass the pause
      expect -re "^ *--More--\[^\n\r]*"
      send ""
      expect -re "^ *--More--\[^\n\r]*"
      send ""
      expect -re "^ *--More--\[^\n\r]*"
      send ""
      # Capture the results of the command into a variable. This can be displayed, or written to disk.
      sleep 10
      expect -re .*
      set results $expect_out(buffer)
      # Copy buffer in a file
      set config [open linkproof.txt w]
      puts $config $results
      close $config
      # Exit the session.
      expect "#"
      send "logout\r"
      expect eof

    The content of the output file:

      The authenticity of host '@IP (XXX.XXX.XXX.XXX)' can't be established.
      RSA key fingerprint is XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.
      Are you sure you want to continue connecting (yes/no)? @username
      Please type 'yes' or 'no': @password
      Please type 'yes' or 'no': system config immediate
      Please type 'yes' or 'no':

    As you can see, the result of the command is not in the file. Could you please help me understand why?

    Thanks for your help,
    Romuald

    Read the article

  • how to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing storage and performance optimization), the consequence to me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku.

    What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is:

      git push -f heroku local-topic-branch:refs/heads/master

    What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know!

    The caveat for this, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like GitHub) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku, so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary.

    Can anybody tell me if this would work?

      [remote "heroku"]
          url = git@heroku.com:my-app.git
          push = +refs/heads/*:refs/heads/master

    I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only. I still have a separate repository, like GitHub, for backup and cloning of all my work.

    Footnote: This question is similar to, but not quite the same as http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku
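
    For what it's worth, a hedged per-invocation alternative (no config change needed): Git accepts HEAD on the left-hand side of a push refspec, so the currently checked-out branch can be pushed onto Heroku's master explicitly.

      # push whatever branch is currently checked out onto heroku's master
      git push -f heroku HEAD:master

      # the same thing with the fully qualified ref name
      git push -f heroku HEAD:refs/heads/master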

    Read the article

  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vBulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all, and the extracted files are verified to be JPEG by hex dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open them with evince, I get this message: "Error interpreting JPEG image file (JPEG datastream contains no image)". What could be going on here?

    Some background info:

    - the sqldump is around 8 years old
    - vBulletin 2.x was the software that stored the info
    - most likely PHP 4 was used
    - most likely MySQL 4.0, possibly even 3.x
    - the column datatype these attachments are stored in is mediumtext

    My Python 3.1 script:

      #!/usr/bin/env python3.1
      import re

      trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""")
      trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""")
      extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""")

      with open('attachments.sql', 'rb') as fh:
          for line in fh:
              data = trim_l.findall(line)[0]
              data = trim_r.findall(data)[0]
              data = extractor.findall(data)
              if data:
                  name, data = data[0]
                  try:
                      filename = 'files/%s' % str(name, 'UTF-8')
                      ah = open(filename, 'wb')
                      ah.write(data)
                  except UnicodeDecodeError:
                      continue
                  finally:
                      ah.close()
          fh.close()

    Update: The JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers", so I'm trying to find a more complete list. Any guidance here would also be appreciated.
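
    A hedged guess at a missing step, purely as a sketch (the assumption here, not stated in the post, is that the dump was made without --hex-blob): mysqldump escapes bytes such as NUL, quotes, newlines and the backslash itself inside string literals, and 0x5C is the ASCII backslash, which would explain the stray FF5C sequences. The extracted bytes would then need to be un-escaped before they are valid JPEG data, along these lines:

      # Map of mysqldump's backslash escape sequences back to the raw bytes they stand for.
      MYSQL_ESCAPES = {
          b'\\0': b'\x00',
          b"\\'": b"'",
          b'\\"': b'"',
          b'\\b': b'\x08',
          b'\\n': b'\n',
          b'\\r': b'\r',
          b'\\t': b'\t',
          b'\\Z': b'\x1a',
          b'\\\\': b'\\',
      }

      def unescape_mysql(data):
          """Replace known mysqldump escape sequences in 'data' with the original bytes."""
          out = bytearray()
          i = 0
          while i < len(data):
              pair = data[i:i + 2]
              if pair in MYSQL_ESCAPES:
                  out += MYSQL_ESCAPES[pair]
                  i += 2
              else:
                  out.append(data[i])
                  i += 1
          return bytes(out)

      # e.g. ah.write(unescape_mysql(data)) instead of ah.write(data)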

    Read the article

  • Does using ReadDirectoryChangesW require administrator rights?

    - by Alex Jenter
    The MSDN documentation says that using ReadDirectoryChangesW implies that the calling process has the Backup and Restore privileges. Does this mean that only a process launched under an administrator account will work correctly?

    I've tried the following code; it fails to enable the required privileges when running as a restricted user.

      void enablePrivileges()
      {
          enablePrivilege(SE_BACKUP_NAME);
          enablePrivilege(SE_RESTORE_NAME);
      }

      void enablePrivilege(LPCTSTR name)
      {
          HANDLE hToken;
          DWORD status;
          if (::OpenProcessToken(::GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken))
          {
              TOKEN_PRIVILEGES tp = { 1 };
              if (::LookupPrivilegeValue(NULL, name, &tp.Privileges[0].Luid))
              {
                  tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
                  BOOL result = ::AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL);
                  verify(result != FALSE);
                  status = ::GetLastError();
              }
              ::CloseHandle(hToken);
          }
      }

    Am I doing something wrong? Is there any workaround for using ReadDirectoryChangesW from a non-administrator user account? It seems that .NET's FileSystemWatcher can do this. Thanks!

    Read the article

  • How can I use the PHP mail() function with PHP-FPM on Nginx?

    - by TheBlackBenzKid
    I have searched everywhere for this and I really want to resolve it. In the past I just ended up using an SMTP service like SendGrid for PHP and a mailing plugin like SwiftMailer. However, I want to use PHP's mail().

    Basically my setup (I am new to server setup, and this is my personal setup following a tutorial):

    - Nginx
    - Rackspace Cloud
    - PHP 5.3
    - PHP-FPM
    - Ubuntu 11.04

    My phpinfo() returns this for the mail entries:

      mail.log                        no value
      mail.add_x_header               On
      mail.force_extra_parameters     no value
      sendmail_from                   no value
      sendmail_path                   /usr/sbin/sendmail -t -i
      SMTP                            localhost
      smtp_port                       25

    Can someone help me understand why mail() will not work? My script is working on all other sites; it is a normal mail command. Do I need to set up logs or enable some PHP port on the server?

    My sample script:

      <?
      # FORMS VARS
      // $to = $customers_email;
      // $to = $customers_email;
      $to = $_GET["customerEmailFromForm"];
      $subject = "Thank you for contacting Real-Domain.com";
      $message = "
      <html>
      <head>
      </head>
      <body>
      Thanks, your message was sent and our team will be in touch shortly.
      <img src='http://cdn.com/emails/thank_you.jpg' />
      </body>
      </html>
      ";
      $headers = "MIME-Version: 1.0" . "\r\n";
      $headers .= "Content-type:text/html;charset=iso-8859-1" . "\r\n";
      $headers .= 'From: <[email protected]>' . "\r\n";

      // SEND MAIL
      mail($to, $subject, $message, $headers);
      ?>

    Thanks
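
    One hedged thing to check (an assumption on my part, not something from the post): with this configuration PHP's mail() simply pipes the message to the sendmail_path shown above, so if nothing actually provides /usr/sbin/sendmail on the box, every call will fail quietly. A quick sanity check, with the package name as an example only:

      # is there anything at the path PHP will invoke?
      ls -l /usr/sbin/sendmail

      # if not, install an MTA that provides a sendmail-compatible binary, e.g.
      sudo apt-get install postfix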

    Read the article

  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server runs ejabberd as an XMPP server, and I configured it to use PostgreSQL instead of Mnesia - or so I thought. My procedure goes something like "dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system". However, when I try running ejabberd again I get a crash:

      =CRASH REPORT==== 3-Dec-2010::22:05:00 ===
      crasher:
        pid: <0.36.0>
        registered_name: []
        exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]},
                        {'EXIT',"Error reading Mnesia database"}}}
          in function application_master:init/4

    Here I was thinking that my system was running on PostgreSQL, while it seems I was still using Mnesia. I have several questions:

    1. How can I make sure Mnesia is not being used?
    2. How can I divert all the ejabberd activities to PostgreSQL?

    This is the modules part of my ejabberd.cfg file:

      {modules,
       [
        {mod_adhoc, []},
        {mod_announce, [{access, announce}]}, % requires mod_adhoc
        {mod_caps, []},
        {mod_configure, []}, % requires mod_adhoc
        {mod_ctlextra, []},
        {mod_disco, []},
        {mod_irc, []},
        {mod_last_odbc, []},
        {mod_muc, [
          {access, muc},
          {access_create, muc},
          {access_persistent, muc},
          {access_admin, muc_admin},
          {max_users, 500}
        ]},
        {mod_offline_odbc, []},
        {mod_privacy_odbc, []},
        {mod_private_odbc, []},
        {mod_pubsub, [ % requires mod_caps
          {access_createnode, pubsub_createnode},
          {plugins, ["default", "pep"]}
        ]},
        {mod_register, [
          {welcome_message, none},
          {access, register}
        ]},
        {mod_roster_odbc, []},
        {mod_stats, []},
        {mod_time, []},
        {mod_vcard_odbc, []},
        {mod_version, []}
       ]}.

    What am I missing? I am assuming the crash is due to the Mnesia DB being used by ejabberd, and since it's out of sync with the PostgreSQL DB, it cannot operate correctly - but maybe I'm totally off track here, and I would love some direction.

    EDIT: One problem solved. Since I'm using the Amazon cloud, I needed to hardcode the ERLANG_NODE so it wouldn't be defined by the hostname (which changes on reboot). This got my ejabberd running, but I still wish to stop using Mnesia, and I wonder what part of ejabberd is still using it and how I can find it.
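
    As a hedged illustration only (host, database name and credentials below are placeholders, and this assumes ejabberd 2.x's ODBC support): storage is pointed at PostgreSQL in ejabberd.cfg with the auth_method and odbc_server options, in addition to using the _odbc module variants already shown above. Note, though, that ejabberd keeps some internal tables (sessions, routing, and modules that have no _odbc variant) in Mnesia regardless, so the Mnesia spool directory still has to exist and match the node name.

      %% illustrative values only -- replace with the real host and credentials
      {auth_method, odbc}.
      {odbc_server, {pgsql, "localhost", "ejabberd", "ejabberd_user", "password"}}.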

    Read the article

  • Website often sticks, but clicking the link again loads everything fine

    - by Dave
    Hi, I have a website which normally is very fast; however, in the last week or two we've run into a problem whereby, randomly, if you click a link the browser will just sit there with the throbber spinning but the page doesn't appear to load. If you click the link again, it then responds straight away. This doesn't seem to be limited to Chrome, Firefox or IE, as we've tried all of them with all having the same problem.

    The site is built in ASP, connects to a MySQL database and runs on a dedicated Windows 2003 server. The firewalls were changed in the data centre recently to pixies, but I've not been able to reproduce the problem outside of the office.

    Within the office, we have 11 people (only 7 today and still experiencing problems) connected to a 7MB ADSL connection with Eclipse. I have made some changes to the network in the office, namely wiring 4 desks back to the main 24-port switch rather than to a small 5-port which then linked on to the 24-port switch... however, we were having problems before this was done, and I did this to try and rule out an issue with the switch.

    We have a backup ADSL connection, so I may try switching to that, or switching to a different router, but do you think this is a likely issue? Failing that, what else would you check?

    Thanks, sorry for the length!
    Dave

    Read the article

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background: I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF and ASP.NET/C# for the server, and SQL Server for storage. The data model requires some flexibility per user (the ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely.

    Problem: I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF (and similarly allow the client to bind to the resulting data - DataGrid is a key UI element). Admission: I don't yet know enough about WCF to understand if it supports ExpandoObject directly, but I suspect it will.

    Options: I started off looking at WCF RIA Services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeal. The options I'm considering include:

    1. Using WCF RIA Services and passing the data over the network directly in EAV form (i.e. Dictionary), and handling the binding issue purely on the client side (like this).
    2. Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and to transform to/from EAV form on the server (spam preventer stopped me from posting a URL here, I'll try it in a comment).
    3. Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations of that approach might be yet.

    I'm interested in comments on these approaches as well as alternative approaches that might solve the problem.

    Other notes: The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).

    Read the article

  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet with one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005), so we get integrity, a single replication plan, etc., or storing them on the file system?

    One issue we have is that we have multiple load-balanced servers that are required to have the same data at all times. As of now we have SQL replication taking care of that, but file replication seems to be a little tougher. Another concern is that we would like to have multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or maybe dynamically creating the resolution we would like upon request.

    Our concerns are with the following:

    - Data integrity
    - Data replication
    - Multiple resolutions
    - Speed of database vs file system
    - Overhead load of database vs file system
    - Data management and backup

    Does anyone have a similar situation or have any input on what would be recommended? Thanks in advance for the help!

    Read the article

  • Converting FoxPro Date type to SQL Server 2005 DateTime using SSIS

    - by Avrom
    Hi,

    When using SSIS in SQL Server 2005 to convert a FoxPro database to a SQL Server database, if the given FoxPro database has a date type, SSIS assumes it is an integer type. The only way to convert it to a dateTime type is to manually select that type. However, that is not practical to do for over 100 tables.

    Thus, I have been using a workaround in which I use DTS on SQL Server 2000, which converts it to a smallDateTime, then make a backup, then restore into SQL Server 2005. This workaround is starting to be a little annoying. So, my question is: is there any way to set up SSIS so that whenever it encounters a date type it automatically assumes it should be converted to a dateTime in SQL Server, and applies that rule across the board?

    Update: To be specific, if I use the import/export wizard in SSIS, I get the following error:

      Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider.

    followed by a list of a given table's date columns. If I manually set each one to a dateTime, it imports fine. But I do not wish to do this for a hundred tables.

    Read the article

  • Trying to attach a file from SD Card to email

    - by Chrispix
    I am trying to launch an Intent to send an email. All of that works, but when I try to actually send the email a couple of 'weird' things happen. Here is the code:

      Intent sendIntent = new Intent(Intent.ACTION_SEND);
      sendIntent.setType("image/jpeg");
      sendIntent.putExtra(Intent.EXTRA_SUBJECT, "Photo");
      sendIntent.putExtra(Intent.EXTRA_STREAM, Uri.parse("file://sdcard/dcim/Camera/filename.jpg"));
      sendIntent.putExtra(Intent.EXTRA_TEXT, "Enjoy the photo");
      startActivity(Intent.createChooser(sendIntent, "Email:"));

    If I launch it using the Gmail menu context, it shows the attachment, lets me type who the email is to, and edit the body and subject. No big deal. I hit send, and it sends. The only thing is the attachment does NOT get sent.

    So I figured, why not try it with the Email menu context (for my backup email account on my phone). It shows the attachment, but no text at all in the body or subject. When I send it, the attachment sends correctly.

    That would lead me to believe something is quite wrong. Do I need a new permission in the Manifest to launch an intent to send email with an attachment? What am I doing wrong?

    Read the article

  • Can I recover lost commits in a SVN repository using a local tracking git-svn branch?

    - by Ian Stevens
    An SVN repo I use git-svn to track was recently corrupted, and a backup was recovered. However, a week's worth of commits were lost in the recovery. Is it possible to recover those lost commits using git-svn dcommit from my local git repo? Is it sufficient to run git-svn dcommit with the SHA1 of the last recovered commit in SVN? e.g.

      > svn info http://tracked-svn/trunk | sed -n "s/Revision: //p"
      252
      > git log --grep="git-svn-id:.*@252" --format=oneline | cut -f1 -d" "
      55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
      > git svn dcommit 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a

    Or will the git-svn-id need to be stripped from the intended commits?

    I tried this using --dry-run but couldn't tell whether it would try to submit all commits:

      > git svn dcommit --verbose --dry-run 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
      Committing to http://tracked-svn/trunk ...
      dcommitted on a detached HEAD because you gave a revision argument.
      The rewritten commit is: 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a

    Thanks for your help.

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure! I was wondering if there is a definitive list somewhere of what is and is not supported by SQL Azure compared with SQL Server 2008. I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing.

    For example, quite a lot is summarised in this blog entry http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html

    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace flags

    which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx

    I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure:

    - XML types (the MSDN does mention large custom types - I guess it may include this?? even if the data schema is really small?)
    - Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list, or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • How can I do rapid application development with ASP.NET MVC?

    - by Erik Forbes
    I've been given a short amount of time (~80 hours to start with) to replace an existing Access database with a full-blown SQL + web system, and I'm enumerating my options. I would like to use ASP.NET MVC, but I'm unsure how to use it effectively with my short timetable. For the database backend I'll be using LINQ to SQL, as it's a product I already know and I can get something working with it quickly. Does anyone have any experience with using ASP.NET MVC in this way and can share some insight?

    Edit: The reason I've been interested in ASP.NET MVC is because I know (100% confirmed) that there will be more work to do after this first round, and I'd like my maintenance work to be as easy as possible. In my experience, WebForms applications tend to break down over repeated maintenance, despite discipline. Maybe there's a middle ground? How difficult would it be for me to, say, build the app with WebForms, then migrate it to MVC later when I have more time budgeted to the project?

    Edit 2: Further background: the Access application I'm replacing is used in some capacity by everyone in the building, and since it was upgraded from Access 98 to 2003 it's been crashing daily, causing hours of lost productivity as people have to re-enter data since the last backup. This is the reason for the short amount of time - this is a critical business function, and they can't afford to keep re-entering data on a daily basis.

    Read the article

  • Ruby FileUtils.cp_r Permission Denied when :preserve => true

    - by slawley
    Hello,

    I am trying to implement a poor man's backup/mirroring script and am having some trouble. I am on Windows XP, using Ruby's FileUtils module to recursively copy files. As long as I don't set the :preserve flag to true, everything works fine.

    Works:

      FileUtils.cp_r('Source_dir', 'Dest_dir', :verbose => true)

    Doesn't work:

      FileUtils.cp_r('Source_dir', 'Dest_dir', :verbose => true, :preserve => true)

    I have full permissions on Dest_dir, as it's on the desktop of my local machine and I just created it. I can copy and delete files and folders, but apparently changing, or maintaining, the file attributes with :preserve isn't working. I haven't had a chance to try this on a Mac or Linux box, but from reading around online the :preserve flag is a normal stumbling block to come up against in a Windows environment.

    In a similar line of questioning, what is the default behavior of FileUtils.cp_r when it encounters an existing file at the destination directory? Does it simply overwrite and replace everything in the destination with whatever is in the source, or can I skip a file with conflicts and just log it for resolution later? (If this should be a separate question, just let me know and I'll make it one.)

    Thanks,
    Spencer

    Read the article

  • Tomcat reporting 404 errors on all newly deployed WAR files?

    - by dacracot
    I deployed a WAR file into $TOMCAT_HOME/webapps by copying the file into the directory, just like I've done a thousand times before. Tomcat detects the WAR and inflates it. I can traverse the directory tree on my server at the command line (it's Fedora). But when I address the webapp within my client machine's browser, I get nothing but 404 errors.

    This has happened to the last two deployments of completely separate WARs. The first was a replacement of an existing WAR. I first deleted the WAR and its inflated directory, and then copied in the WAR, which inflated... 404. I deleted everything again, put back the previously working WAR from backup. It inflated and worked. The second was a completely new, never before deployed WAR... nothing but 404.

    Other WARs are working, but now I'm afraid to change anything until I know what is going on. Any clues?

    Edit: From my comment you can see that the logs included "SEVERE: Error listenerStart" after the WAR was deployed by Tomcat. There were no stack traces or other errors reported.
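
    A hedged aside (the file location and property names below follow standard Tomcat juli logging conventions; nothing here comes from the post): "SEVERE: Error listenerStart" usually swallows the underlying stack trace, and adding a logging.properties to the web app often makes Tomcat print the real exception, which tends to identify the failing listener.

      # WEB-INF/classes/logging.properties  (illustrative)
      handlers = java.util.logging.ConsoleHandler
      java.util.logging.ConsoleHandler.level = FINE
      org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = FINE
      org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = java.util.logging.ConsoleHandler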

    Read the article

  • Old desktop programmer wants to create S+S project

    - by Craig
    I have an idea for a product that I want to be web-based. But because I live in Brasil, the internet is not always available, so there needs to be a desktop component that is available for when the internet is down. Also, I have been a SQL programmer and a desktop application programmer using dBase, VB and Pascal, and I have created simple websites using HTML and website creation tools such as FrontPage.

    From my research, I think I have the following options: PHP, Ruby on Rails, Python or .NET for the programming side; MySQL for the DB; and Apache, or possibly IIS, for the webserver. I will probably start with a local ISP provider for the cloud service, but then maybe move to something more "robust" and universal in the future, i.e. Amazon, or Azure, or something along that line.

    My question then is this: what would you recommend for something like this? I'm sure I have not listed all of the possibilities, but these are the ones I have researched and thought of.

    Thanks everyone,
    Craig

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different web hosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for:

    - As automated as possible - ideally, without any involvement on my part once it's set up.
    - Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
    - Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription.
    - One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.

    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously?

    Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody.

    UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
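
    For illustration, a rough sketch of the backup-then-transfer idea in script form (database name, paths, host and credentials are all placeholders): because the BACKUP statement only returns once the backup file is complete, running it synchronously and only then starting the FTP transfer avoids shipping a half-written file.

      # hypothetical sketch: back up one database via osql, then push the file over FTP
      import subprocess
      from ftplib import FTP

      DB = 'MyDatabase'                          # placeholder database name
      BACKUP_FILE = r'C:\backups\MyDatabase.bak' # placeholder path

      # osql returns only once the BACKUP statement has finished writing the file
      rc = subprocess.call([
          'osql', '-E', '-Q',
          "BACKUP DATABASE [%s] TO DISK = '%s' WITH INIT" % (DB, BACKUP_FILE),
      ])

      if rc == 0:
          ftp = FTP('target.example.com')        # placeholder host and credentials
          ftp.login('user', 'password')
          with open(BACKUP_FILE, 'rb') as fh:
              ftp.storbinary('STOR MyDatabase.bak', fh)
          ftp.quit()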

    Read the article

  • Payment Processors - What do I need to know if I want to accept credit cards on my website?

    - by Michael Pryor
    This question talks about different payment processors and what they cost, but I'm looking for the answer to: what do I need to do if I want to accept credit card payments?

    Assume I need to store credit card numbers for customers, so that the obvious solution of relying on the credit card processor to do the heavy lifting is not available. PCI Data Security, which is apparently the standard for storing credit card info, has a bunch of general requirements, but how does one implement them? And what about the vendors, like Visa, who have their own best practices? Do I need keyfob access to the machine? What about physically protecting it from hackers in the building? Or even, what if someone got their hands on the backup files with the SQL Server data files on them? What about backups? Are there other physical copies of that data around?

    Tip: If you get a merchant account, you should negotiate that they charge you "interchange-plus" instead of tiered pricing. With tiered pricing, they will charge you different rates based on what type of Visa/MC is used -- i.e. they charge you more for cards with big rewards attached to them. Interchange-plus billing means you only pay the processor what Visa/MC charges them, plus a flat fee. (Amex and Discover charge their own rates directly to merchants, so this doesn't apply to those cards. You'll find Amex rates to be in the 3% range and Discover could be as low as 1%. Visa/MC is in the 2% range.) This service is supposed to do the negotiation for you (I haven't used it, this is not an ad, and I'm not affiliated with the website, but this service is greatly needed.)

    This blog post gives a complete rundown of handling credit cards (specifically for the UK).

    Read the article

  • Questions related to Mantis

    - by Komal Matkar
    We are interested in installing Mantis, but we have some doubts. Please clarify them as early as possible if you can, so that we can proceed further.

    1. We have one team in the USA (the client's place) and one in India. On which server should we install Mantis? If we install it in the USA, will it run slowly from India?
    2. What about technical support? Paid technical support may be available, but how much support will be given for free? (We have to discuss this with the client.)
    3. As we have seen on your website, it supports Oracle and SQL databases. But we wanted to know the lowest versions of Oracle and SQL Server it supports. Please send us the minimum requirements.
    4. What is the capacity of the database to store defects? Is a backup facility available? If yes, please tell us how we should take backups. We have a big team and 5-6 applications, so it should not give us further problems.
    5. Database support: do you provide the database, or database support, during installation? Will all the prerequisite applications get installed during installation, or do we need to install any application separately?
    6. How many users can access it at a time? Will it work slowly if more users are working at the same time?

    Thanks,
    Komal

    Read the article
