Search Results

Search found 5423 results on 217 pages for 'care industry'.


  • How to back up servers to an SSH host with low traffic and access to versions and encryption?

    - by leto
    Hello, I haven't run backups of my personal stuff for more years than I can remember, until I recently woke up and realised, contrary to my prior belief: actually, I care! :) Now I have a central data server at home to which I want to attach external media and save backups of my most important stuff: years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years and also tried tar over ssh, but I don't yet know the simplest and easiest-to-maintain way to do it. Here's my workload:
    - A typical LAMP server (<5 GB of data, so lots of small files) which I'd like to back up fully, connected via 10 Mbit
    - My personal stuff (<750 GB of data) from a Mac connected via GE
    - My passwords in an encrypted container (100 MB) from OpenBSD connected via serial PPP
    - My e-mail from the last ten years (<25 GB) as a Maildir, which I need to keep in a readable format
    - Some archives (tar.*) which I need to back up only once and keep in a readable format
    (I deleted my own ideas, as I'm here for suggestions.) What I need:
    1. Use an SSH tunnel for data transfer
    2. Be quick with lots of small files
    3. Keep revisions
    4. Be sure the data I save is not corrupted
    5. Intelligent resume functions and the ability to deal with network congestion :)
    6. Compressed and optionally encrypted storage
    7. Be able to extract data from the backup easily (filesystem-like usage would be nice)
    How, and with what software, would you back up this stuff? Hints to tools that can help solve only part of my problem (like encryption) are also greatly appreciated. Greets

    Read the article

  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client in a private LAN behind a Netgear WGR614v7 router, and a remote Debian server machine outside. I'd like to tunnel all traffic (or specified ports/protocols) over this outside server, so that when I'm on the Windows machine and I request serverfault.com, the request appears to come not from the WGR614v7's public IP but from the server. It's not only about HTTP traffic; it's basically about everything I'd like to tunnel: other TCP ports, even UDP, etc. It must be transparent to the applications, i.e. they shouldn't be aware of it; all their requests should just appear as coming from the server, and the tunnel between the two machines takes care of the packets.
    I'm aware of e.g. PuTTY and forwarding individual ports or using it as a SOCKS proxy, but not many applications support this, and the support in Windows itself looks non-existent to me. I might add that it should be something reasonably easy to set up. I've heard about PPTP, but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common solutions for Linux (Openswan and strongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex; on the other hand, maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm looking for open (source) solutions. What other options do I have, or which direction should I head in?

    Read the article

  • RedStation.com is a haven for DDoS attackers. How do I file a complaint?

    - by Ehsan
    Sorry, I don't know where else to open this subject. This is not the first time we have faced a massive DDoS attack from one of the servers at RedStation.com, and even after we contacted their abuse department with the log there has been no cooperation; they don't even bother themselves about it, and we don't know how to stop the activity. Do you know how to file a complaint against this datacenter? We cannot stay patient any more while they do not care about such things on their network. It seems like they are a haven for attackers now, since they close their eyes to gain more money. I guess some global organization is missing in this matter, one that would investigate such activity and make sure providers are responsible for their services. Here is some of the log:
      2686M 75G DROP all -- * * 31.3-RedStation 0.0.0.0/0 rt: 16167
      0.002007 31.3-RedStation -> my-server-ip UDP Source port: 36391 Destination port: 16167
      0.002011 31.3-RedStation -> my-server-ip UDP Source port: 38367 Destination port: 16312
      0.002014 31.3-RedStation -> my-server-ip UDP Source port: 39585 Destination port: 12081
      0.002018 31.3-RedStation -> my-server-ip UDP Source port: 39585 Destination port: 12081
      0.002021 31.3-RedStation -> my-server-ip UDP Source port: 38367 Destination port: 16312
      0.002025 31.3-RedStation -> my-server-ip UDP Source port: 39585 Destination port: 12081
      0.002033 31.3-RedStation -> my-server-ip UDP Source port: 36391 Destination port: 16167
      0.002037 31.3-RedStation -> my-server-ip UDP Source port: 38367 Destination port: 16312
      0.002040 31.3-RedStation -> my-server-ip UDP Source port: 38367 Destination port: 16312
      0.002044 31.3-RedStation -> my-server-ip UDP Source port: 38367 Destination port: 16312
      0.002047 31.3-RedStation -> my-server-ip UDP Source port: 39585 Destination
    Any response would be appreciated.

    Read the article

  • Web Server slows down (ASP.NET)

    - by mfeingold
    Below is a question I posted on Stack Overflow; as suggested by Martin Clarke, I'm also posting it here. We have a really strange problem: one of the servers in the server farm becomes really slow. We see a number of timeouts in the logs, and overall response time is not where it should be (and is on the other servers in the farm). What is also strange is that it is not just the web app; just logging into the server takes up to 1.5 minutes to show you the desktop. Once you are in, the system is as responsive as ever, unless you try to launch something, e.g. Notepad: it takes another minute to launch, and after launching it works fine.
    I checked a number of things: memory utilization is reasonable, CPU is below 15%, and Windows handles and event logs do not show anything. Recycling the ASP.NET process does not fix it; it still takes over a minute to log in. Rebooting the server helped, but now it has started to slow down again. After a closer look we found out that the Windows Temp directory is full of temp files, over 65k of them. This is certainly something to take care of. But my question is: could it be the root cause of the sluggishness, or is there still something else lurking in the shadows?
    Edit: After more digging I am zeroing in on an issue related to the size of the temp directories. This article (see the original post; this site will not let me include a second link) describes something very similar. It still does not answer the question of why the server stays slow even when there is no activity.
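
    As a stopgap while investigating, clearing old files out of the Windows temp directory can be scripted; below is a minimal C# sketch (the retention window and directory choice are assumptions, not values from the original post) that deletes temp files older than a week and skips anything still locked by a running process.

        using System;
        using System.IO;

        class TempCleaner
        {
            static void Main()
            {
                // Current profile's temp; substitute e.g. C:\Windows\Temp for the system-wide directory.
                string tempDir = Path.GetTempPath();
                TimeSpan maxAge = TimeSpan.FromDays(7);   // assumed retention window

                foreach (string file in Directory.GetFiles(tempDir))
                {
                    try
                    {
                        if (DateTime.UtcNow - File.GetLastWriteTimeUtc(file) > maxAge)
                            File.Delete(file);
                    }
                    catch (IOException)
                    {
                        // File is in use; skip it.
                    }
                    catch (UnauthorizedAccessException)
                    {
                        // No permission to delete; skip it.
                    }
                }
            }
        }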

    Read the article

  • Windows Vista Context Menu>New... does not find entries

    - by Paul
    I was trying to remove a virus and foolishly did not back up the registry keys I deleted, because I (thought I) only deleted entries from the folders of programs I did not care about. However, I think I have done something wrong here: now when I open a context menu (right click) in any location and hover over the "New..." option, I don't get any options, just a greyed-out box saying "(Empty)". So far I have found out that the entries themselves are still there (using the locations provided here: Windows 7 - Add an item to 'new' context menu). I have also used a program recommended in that thread, which also finds the entries intact and enabled. So it seems I may have deleted the entry which tells Vista where to look to find the file types that can be created. How can I restore this so the entries are shown again? I know System Restore is an option, but as I said, I did this while removing a (very stubborn) virus, so that is a last resort.

    Read the article

  • What's wrong with closing applications on Windows Mobile?

    - by balpha
    As far as I can tell, this annoys the crap out of the people who do notice it and at best gives no real benefit to the people who don't: why did Microsoft decide to make the "X" on Windows Mobile (or CE before that) not close, but only hide, the application, and thus keep cluttering up your memory? WM wants you to go to Control Panel > Memory and answer a "Do you really want to?" prompt to shut down the app. Pretty much every WM application I've seen that did not come from Microsoft has a "Quit" menu choice. The number of task managers out there that let you quit programs is larger than the count of e-mails from African bank managers who want me to take care of some millions of bucks that belonged to a deceased customer of theirs. My new HTC even comes with a close-able (not closeable, though) task manager pre-installed. But still today, Word Mobile just wants to hide, not be closed. I don't want to get a "That's M$hit, get used to it" answer; I really want to know: what in the world is the reason for this decision, and even more, for still sticking with it?

    Read the article

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some sources strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200 rpm drives with 32 MB cache), would the minor differences between, say, a Seagate and a Western Digital (say one has a 128 MB/s read rate and the other a 150 MB/s read rate, plus various other minor differences) actually cause any notable performance loss, i.e. potentially worse than two matched 128 MB/s drives? Or does RAID not really care and give you an essentially optimal result (e.g. up to 278 MB/s total read speed for RAID 0 and 1), and similarly for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also, I couldn't find much information on how this differs between RAID setups, e.g. RAID 0 vs RAID 1, or software vs hardware RAID. I'm assuming such things have an effect and that it's not all the same for RAID in general?

    Read the article

  • Need some help figuring out BB4Win or any of its varieties: does it replace Windows Explorer?

    - by StormRyder
    I don't really care too much about replacing the original Windows desktop, taskbar and system tray; what I really dislike is the new Win7/Vista Windows Explorer (screenshot here, just to make sure there's no confusion). So what I'm trying to figure out is whether BB4Win or its plugins have a replacement for it. The BB4Win website pretty much doesn't make sense to me. I'm using Win7 64-bit. Also, off topic: I was looking at a related question on this site (Good Windows shell replacement) and wanted to ask the same question I'm asking now as a comment under the answer given by Molly7244, the person who suggested BB4Win, but there seems to be no way to add new comments anywhere. Why? Am I missing something? I don't quite understand the way this site is set up; I'm used to the more typical forum format. Sorry, and thanks for any help!

    Read the article

  • Customizing tmux status to represent current working directory and files

    - by user69397
    I've been playing with this for a couple of days, so I'm sure I'm missing something simple. I love tmux and use it for development, but I have so many windows that I need a better way of distinguishing them in the status bar and in the buffer list; seeing a list of "bash" and "vim" isn't really helpful at all. And since they're all on the same host, I don't care about the hostname right now. I'd like to show the current working directory and the file being worked on. For example, when I view the list of buffers I currently see:
      (0) 0: vim [100x44] (1 panes) "murph"
      (1) 1: vim [100x44] (1 panes) "murph"
      (2) 2: bash- [100x44] (1 panes) "murph"
      (3) 3: bash* [100x44] (1 panes) "murph"
    Here's what I'd like to see:
      0:vim main.py ~/devl/project1
      1:vim index.html ~/devl/samples/staticfiles
      2:bash ~/devl/sandbox
      3:bash ~/.vimrc
    I'd like to see similar info in the status bar for each individual window. While I am able to get PWD to show up in the status bar of a window, it's only the working directory from where tmux was launched, which isn't any help as I change directories. I'm hoping this can be done without a bunch of scripts. Thanks all.

    Read the article

  • Git workflow for two tight-knit projects

    - by Pioul
    Two very similar projects: I'm maintaining an online Markdown editor using Git as its RCS (and incidentally made available on GitHub). From this web app I've created a Chrome app; the code is the same, aside from some Chrome technicalities. I care about open-sourcing these two projects. Still, since the Chrome app's code is the same as the web app's except for some dull details, I initially chose to (1) not publish the Chrome app on GitHub, and (2) not use Git to manage its code. Instead, I would manually review the web app's commits and replicate the few changes in the Chrome app.
    … slightly drifting apart: However, I've decided to add a feature to the Chrome app only. So even though both codebases will remain broadly similar, they'll be diverging enough to make me reconsider the rationale behind my initial decision not to version control or share the Chrome app's source code. Since I'm now willing to use Git to version control both apps, and I want to share both of them on GitHub, how should I go about it? Should I use two different repositories, or one repo with two long-running branches? What would be the pros and cons of each approach in this context? What would be the easiest/fastest way to regularly "import" commits from the web app to the Chrome app, since the web app is going to remain the master branch? Is cherry-picking the only solution?

    Read the article

  • How to throttle HTTP requests on a Linux machine?

    - by hooraygradschool
    EDIT: here is the summary: I need to reduce max connections, preferably system-wide on Ubuntu 11.04, but at least within Google Chrome. I do not need or want to throttle bandwidth; Verizon seems to only care about the number of connections, so that is all I want to change. Also, I don't want to use Firefox unless I have to; I have three other machines all using Chrome and synced, and I just prefer it over Firefox.
    I use tethering for my home internet connection via my Verizon cell phone, without paying for it. This works just fine for streaming Netflix via my Nintendo Wii and pretty much every other conceivable use I've had for it, except that during heavy usage with multiple tabs open on my laptop, the network connection on my phone will just turn off, then on again, then off, and it never fully connects. I think, based on this and other questions, that this is caused by Verizon getting too many HTTP requests from my phone. Is there some software, script, setting or otherwise that would allow me to throttle my requests to, say, 5 or 10, or whatever turns out to be one less than Verizon is looking for, so that my cell's network connection is not lost? I would far prefer a slowdown to a complete shut-off of my internet connection. I am almost certain it is the quantity of requests and not related to data, because, as I mentioned, Netflix will run all day without a hitch, and that uses more data than anything else I would be doing. If I had a router, I am pretty sure there are settings I could easily change to only allow so many requests at a time, but in this case my phone is my router, so no settings. I'm using Ubuntu 11.04 on my netbook with an HTC Incredible on Verizon (not that the phone details are relevant). I have been trying to figure this out for quite some time; currently the only fix is to ensure that all requests are stopped, and then sometimes it works again, while other times I have to manually turn my 3G service off and then back on. Thank you so much for any assistance!

    Read the article

  • Which Windows 8 tool should I use to "read" or "upload" my latest Windows 7 backup DVD (is it possible)?

    - by Robert
    I've just installed Windows 8 and hadn't made any changes to my new ecosystem when, as I was managing my new drivers, some mess* occurred, I confess. What I have left is every backup tool I made use of under Windows 7, like system images, restore DVDs, monthly up-to-date backups and so on, and I would like to keep using them with Windows 8. I'm one of those with problems managing the AMD switchable GPU drivers. Now I want to stay with Windows 8 (download version, not a clean install) but with my old personal files. I don't care about program updates; everything of interest to me is on the original DVDs. Yesterday I tried refreshing Windows 8 once but it didn't work; maybe I'll try again tonight. What would you do in my place, please?
    *The mess I am talking about is having disabled my Intel GPU (the only driver left) in the Device Manager tool in Windows 8. I get a black screen on system boot. Cheers, C.C.

    Read the article

  • Linux Server partitioning

    - by user1717735
    There's a lot of information about this out there, but also a lot of contradictory information, which is why I need some advice. So far, on the servers I had at home for test (or even "home production") purposes, I didn't really care about partitioning and configured everything in / plus a swap partition, over RAID 0. Nevertheless, this pattern can't apply to production servers. I have found a good starting point here, but it also depends on what the servers will be used for. Basically, I have a server which will run Apache, PHP and MySQL. It will have to handle file uploads (up to 2 GB) and has two 2 TB hard drives. I plan to set up:
      /     100 GB
      /var  1000 GB (Apache files and MySQL files will be here)
      /tmp  800 GB (handles the PHP temp files)
      /home 96 GB
      swap  4 GB
    All of this is of course over RAID 1. But actually, it's not a big deal if I lose data that is being uploaded, so would it be interesting to mount /tmp over RAID 0 while keeping the rest over RAID 1? Sounds complicated…

    Read the article

  • Using oauth2_access_token to get connections in LinkedIn

    - by Pedro
    I'm trying to get a user's connections from LinkedIn using their API, but when I try to retrieve the connections I get a 401 Unauthorized error. The official documentation says:
      "You must use an access token to make an authenticated call on behalf of a user. Make the API calls: you can now use this access_token to make API calls on behalf of this user by appending "oauth2_access_token=access_token" at the end of the API call that you wish to make."
    The API call that I'm trying to make is the following (Error):
      http://api.linkedin.com/v1/people/~/connections:(id,headline,first-name,last-name)?format=json&oauth2_access_token=access_token
    I have tried the following endpoint without any problems (OK):
      https://api.linkedin.com/v1/people/~:(id,first-name,last-name,formatted-name,date-of-birth,industry,email-address,location,headline,picture-urls::(original))?format=json&oauth2_access_token=access_token
    The list of endpoints for the Connections API is described at http://developer.linkedin.com/documents/connections-api, and I just copied and pasted one endpoint from there. So the question is: what's the problem with the endpoint for getting the connections? What am I missing?
    EDIT: For the preAuth URL I'm using:
      https://www.linkedin.com/uas/oauth2/authorization?response_type=code&client_id=ConsumerKey&scope=r_fullprofile%20r_emailaddress%20r_network&state&state=NewGuid&redirect_uri=Encoded_Url
      https://www.linkedin.com/uas/oauth2/accessToken?grant_type=authorization_code&code=QueryString_Code&redirect_uri=EncodedCallback&client_id=ConsummerKey&client_secret=ConsumerSecret
    Please find attached the login screen requesting the permissions.
    EDIT 2: Switched to https and it worked like a charm!
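
    A minimal C# sketch of the call pattern that ended up working (HTTPS, with the token appended as a query parameter) is shown below; the token value is a placeholder and the client code is an illustration, not something taken from the original post.

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class LinkedInConnectionsExample
        {
            static async Task Main()
            {
                string accessToken = "ACCESS_TOKEN_FROM_OAUTH2_FLOW"; // placeholder

                // Note the https scheme; the plain-http variant returned 401 above.
                string url = "https://api.linkedin.com/v1/people/~/connections:(id,headline,first-name,last-name)"
                             + "?format=json&oauth2_access_token=" + Uri.EscapeDataString(accessToken);

                using (var client = new HttpClient())
                {
                    string json = await client.GetStringAsync(url);
                    Console.WriteLine(json);
                }
            }
        }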

    Read the article

  • sitemesh vs jsp-config (<include-prelude>)

    - by Nrj
    Please help in clarifying: in web.xml I have the following:
      <jsp-config>
        <jsp-property-group>
          <url-pattern>*.jsp</url-pattern>
          <el-ignored>false</el-ignored>
          <page-encoding>utf-8</page-encoding>
          <include-prelude>/jstlTaglibs.jspf</include-prelude>
        </jsp-property-group>
      </jsp-config>
    Also in decorators.xml I have:
      <decorator name="footer" page="footer.jsp">
        <pattern>*.action</pattern>
      </decorator>
    which is used via sitemesh.xml. The footer.jsp says something like:
      ...
      <decorator:body />
      <@include .. "footer.jsp"/>
    So what I gather is that both of the snippets above in a sense inject some JSP fragment. Please help by highlighting the differences and benefits of the two approaches. Also, which one is more widely used across the industry?

    Read the article

  • People not respecting good practices at workplace

    - by VexXtreme
    Hi. There are some major issues in my company regarding practices, procedures and methodologies. First of all, we're a small firm and there are only 3-4 developers, one of whom is our boss, who isn't really a programmer; he just chimes in now and then and tries to code some simple things. The biggest problems are:
    Major cowboy coding and lack of methodologies. I've tried explaining to everyone the benefits of TDD and unit testing, but I only got weird looks, as if I were talking nonsense. Even the boss reacted along the lines of "why do we need that? It's just unnecessary overhead and a waste of time."
    Nobody uses design patterns. I have to tell people not to write business logic in code-behind, I have to remind them not to hardcode concrete implementations and dependencies into classes, et cetera. I often feel like a nazi because of this, and people think I'm enforcing unnecessary policies and use of design patterns.
    The biggest problem of all is that people don't even respect common-sense security policies. I've noticed that the college students who work on tech support use our continuous integration and source control server as a dump to store their music, videos, and the TV series they download from torrents. You can imagine the horror when I realized that most of the partition reserved for source control backups was used by entire seasons of TV series and movies. Our development server isn't even connected to a UPS or surge protection; it's just plugged straight into the wall outlet. I asked the boss to buy surge protection, but he said it's unnecessary.
    All in all, I like working here because the atmosphere is very relaxed, the money is good and we're all like a family (so don't advise me to quit), but I simply don't know how to explain to people that they need to stick to some standards and good practices in the IT industry and that they can't behave so irresponsibly. Thanks for the advice.

    Read the article

  • Front End Developer vs. PHP-MySQL Engineer

    - by user301943
    Hello, I want to decide which of these would be the more viable career option. I am ready to quit my current job, so I am looking for a new opportunity; the current job is maintenance with no more active development. My current role is PHP/MySQL developer. I understand web programming very well and am comfortable with RoR/Sinatra/Zend MVC/jQuery/JSON manipulation, etc. I understand the MySQL InnoDB and MyISAM engines and how one differs from the other. Basically, I can manage the deployment of a web application end to end, including configuration of Apache/Nginx servers, memcache, etc.
    On the other hand, I am being offered a Sr. Front End Web Developer role that would require me to extensively write cross-browser/cross-platform compliant HTML/CSS. I understand XHTML/CSS/the box model very well. I would be working with Drupal for the management of websites. While I understand that continuing to work on server-side technologies would always be a good career path, how would the role of a core front-end developer turn out? If I take this opportunity, will I eventually get a chance to focus on UCD, HCI, information architecture, etc.? Are those kinds of roles possible if I focus on front-end development? No offense to front-end developers; I just want to understand if this is something I want to gain mastery over. I have 2 years of industry experience after graduating with an MS in Computer Science. Although I have a CS degree, if I were to take up a serious front-end role, I could probably go back and take some design/HCI/UI courses. Please advise.

    Read the article

  • .NET test harness: what should it have?

    - by Conor
    Hi folks, we have a software house developing code for us on a project, a .NET web service (WCF), and we are also paying for a test harness to be built as a separate billable task on a daily rate. I have just joined the company, am reviewing what we are getting from the software house, and wanted to know what you in the industry think about it. Basically what we got was a WinForm that calls the web service, with an input area (web service request) to drop our XML into, a Submit button, and a response area for the result of the web response, and that's it. Our internal BA has created all the XML request documents, so there was no logic put into the harness around this. Looking on the net for a definition of a test harness I found http://en.wikipedia.org/wiki/Test_harness, which states a harness should do these three things:
    1. Automate the testing process.
    2. Execute test suites of test cases.
    3. Generate associated test reports.
    Clearly we have got none of this, apart from a partial "automate the testing process" via a WinForm. From my development background, a WinForm is what I would have expected someone to produce as a test harness five years ago; they really should be using some sort of tooling around this. I explicitly told the software house I expected some sort of tooling (NUnit, NBUnit, SoapUI) so we could create a regression test pack for future use. [Didn't get it, but I asked for this after the requirements were signed off, as I wasn't employed then :)] Could someone clarify whether my requirement here is unrealistic? I know if I did this myself, I would use NUnit and TDD, and then reuse the test harness as a regression test pack in the future. I am interested to see what the community thinks. Cheers
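
    By way of illustration, a harness built on NUnit rather than a bare WinForm might look like the sketch below. The service contract, endpoint name and assertions are invented for the example (nothing here comes from the actual project); the point is the shape of it: a fixture, repeatable test cases, and a report produced by the test runner, which is what the Wikipedia definition above is describing.

        using System.ServiceModel;
        using NUnit.Framework;

        // Hypothetical WCF contract; substitute the real service interface.
        [ServiceContract]
        public interface IQuoteService
        {
            [OperationContract]
            string GetQuote(string requestXml);
        }

        [TestFixture]
        public class QuoteServiceTests
        {
            private ChannelFactory<IQuoteService> _factory;
            private IQuoteService _client;

            [SetUp]
            public void CreateClient()
            {
                // "QuoteServiceEndpoint" is assumed to be configured in App.config.
                _factory = new ChannelFactory<IQuoteService>("QuoteServiceEndpoint");
                _client = _factory.CreateChannel();
            }

            [TearDown]
            public void CloseClient()
            {
                _factory.Close();
            }

            [Test]
            public void ValidRequest_ReturnsNonEmptyResponse()
            {
                string response = _client.GetQuote("<request><symbol>TEST</symbol></request>");
                Assert.That(response, Is.Not.Null.And.Not.Empty);
            }

            [Test]
            public void MalformedRequest_ThrowsFault()
            {
                Assert.Throws<FaultException>(() => _client.GetQuote("<not-valid-xml"));
            }
        }

    Run under an NUnit runner, a suite like this also doubles as the regression pack mentioned above, since the same fixtures can be replayed against every new build.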

    Read the article

  • MySQL Full-Text Search Across Multiple Tables - Quick/Long Solution?

    - by Kerry
    Hello all, I have been doing a bit of research on full-text searches, as we realized a series of LIKE statements is terrible. My first find was MySQL full-text search. I tried to implement this and it worked on one table, but failed when I was trying to join multiple tables, so I consulted Stack Overflow's articles (see the end for a list of the ones I've been to). I didn't see anything that clearly answered my questions. I'm trying to get this done literally in an hour or two (quick solution), but I also want a better long-term solution. Here is my query:
      SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`, b.`list_price`, b.`price`,
             c.`image`, c.`swatch`, e.`name` AS industry
      FROM `products` AS a
      LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`)
      LEFT JOIN (
          SELECT `product_id`, `image`, `swatch`
          FROM `product_images`
          WHERE `sequence` = 0
      ) AS c ON (a.`product_id` = c.`product_id`)
      LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`)
      INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`)
      WHERE b.`website_id` = 96
        AND b.`status` = 1
        AND b.`active` = 1
        AND MATCH( a.`name`, a.`sku`, a.`description`, d.`name` ) AGAINST ( 'ashley sofa' )
      GROUP BY a.`product_id`
      ORDER BY b.`sequence`
      LIMIT 0, 9
    The error I get is: Incorrect arguments to MATCH. If I remove d.`name` from the MATCH statement it works, and I do have a full-text index on that column. One of the articles says to use an OR MATCH for the other table, but won't that lose the ability to rank the results together or match them properly? Other places say to use UNIONs, but I don't know how to do that properly. Any advice would be greatly appreciated. As for a long-term solution, it seems that either Sphinx or Lucene is best. I am by no means a MySQL guru, and I heard that Lucene is a bit more complicated to set up; any recommendations or directions would be great.
    Articles:
      http://stackoverflow.com/questions/1117005/mysql-full-text-search-across-multiple-tables
      http://stackoverflow.com/questions/668371/mysql-fulltext-search-across-1-table
      http://stackoverflow.com/questions/2378366/mysql-how-to-make-multiple-table-fulltext-search
      http://stackoverflow.com/questions/737275/pros-cons-of-full-text-search-engine-lucene-sphinx-postgresql-full-text-searc
      http://stackoverflow.com/questions/1059253/searching-across-multiple-tables-best-practices
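
    For what it's worth, the usual cause of "Incorrect arguments to MATCH" is that every column inside a single MATCH() must belong to one FULLTEXT index on one table, so columns from `products` and `brands` cannot share a MATCH. A sketch of the common workaround uses one MATCH per table's index and adds the scores for ranking; the index names and the trimmed column list below are illustrative, not taken from the original schema.

        -- Assumes FULLTEXT KEY ft_products (name, sku, description) on `products`
        -- and     FULLTEXT KEY ft_brands (name) on `brands`.
        SELECT a.`product_id`, a.`name`,
               MATCH(a.`name`, a.`sku`, a.`description`) AGAINST ('ashley sofa')
                 + MATCH(d.`name`) AGAINST ('ashley sofa') AS relevance
        FROM `products` AS a
        LEFT JOIN `brands` AS d ON a.`brand_id` = d.`brand_id`
        WHERE MATCH(a.`name`, a.`sku`, a.`description`) AGAINST ('ashley sofa')
           OR MATCH(d.`name`) AGAINST ('ashley sofa')
        ORDER BY relevance DESC;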

    Read the article

  • events not firing in VisualForce

    - by Ben
    In the page below, Topic__c is a single-select picklist. My intention is to have this list control which of the input fields is available below it: the user selects an option, the onchange event should fire, and the fields should be rerendered.
      <apex:inputField value="{!Call_Report__c.Topic__c}" id="topic">
        <apex:actionSupport event="onchange" reRender="tickerInput,sectorInput,bondInput">
          <apex:param name="topicSelection" value="{!Call_Report__c.Topic__c}" />
        </apex:actionSupport>
      </apex:inputField>
      <apex:inputField value="{!Call_Report__c.Tickers__c}" rendered="{!Call_Report__c.Topic__c='Issuer'}" id="tickerInput" />
      <apex:inputField value="{!Call_Report__c.Sector__c}" rendered="{!Call_Report__c.Topic__c='Industry'}" id="sectorInput"/>
      <apex:inputField value="{!Call_Report__c.Security__c}" rendered="{!Call_Report__c.Topic__c='Specific Bond'}" id="bondInput" />
    Am I doing something obviously wrong here? http://community.salesforce.com/t5/Visualforce-Development/Multi-select-picklist-not-firing-event-for-AJAX-refreshes/m-p/173572/highlight/false#M22119 seems to imply that what I am doing is reasonable...

    Read the article

  • CS Master's Degree Project vs. Thesis options

    - by Nwosh
    I'm doing a master's degree in computer science, and I'm currently at the point where I have to decide between the thesis and non-thesis options offered by my university. The thesis option was my first choice; it entails taking fewer courses but tends to take more time while you do the thesis. The non-thesis option involves taking more coursework, sitting a comprehensive exam, and doing a project in one semester with a faculty member.
    I'd like to pursue a PhD eventually (although not right away; I want to get some years of professional experience first), and I've heard that having demonstrated the ability to work on a thesis helps a lot with admission (as in: not doing a thesis raises questions and suggests you're not interested in research), and that the experience itself is very good. At the same time, almost everyone I know who did a thesis at my university took a long time (2-3 years), when in theory it could be done in 1.5 years. I'm a part-time student and I don't really want to spend that much time just getting a master's degree. I could still publish a few papers while working on the project option and be done in a year or so, and I've also heard that a master's degree with a project and more coursework is more desirable to industry.
    So, when applying for a PhD in CS at some of the better universities, would the time spent working on the master's thesis help get me accepted? Or should I opt for the non-thesis option and hope that the extra coursework and publishing some papers make up for not working on a thesis?

    Read the article

  • Accounting System for Winforms / SQL Server applications

    - by Craig L
    If you were going to write a vertical-market C# / WinForms / SQL Server application and needed an accounting "engine" for it, what software package would you choose? By vertical market, I mean the application is intended to solve a particular set of business problems, not be a generic accounting application. Thus the value-add of the program is the 70% of non-accounting-related functionality present in the finished product; the 30% of accounting functionality is merely to cover the basic accounting needs of the business.
    I said all that to lead up to this: the accounting engine needs to come with a royalty-free runtime license and not be super expensive. I've found a couple of C#/SQL Server accounting apps that can be had with source code and a royalty-free runtime for $150k+, and that would be fine for greenfield development funded by a large bankroll, but for smaller apps that sort of capital outlay isn't feasible. Something along the lines of $5k to $15k for a royalty-free runtime would be more reasonable; open source would be even better.
    By accounting engine, I mean something that takes care of, at a minimum:
    - General ledger
    - Invoices
    - Statements
    - Accounts receivable
    - Payments / credits
    Basically, an accounting engine should let the developer concentrate on the value-added (industry-specific business best practices / processes) part of the solution and not worry about how to implement the low-level details of a double-entry accounting system. Ideally, the accounting engine would be licensed on a royalty-free runtime basis. Suggestions, please?
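
    To illustrate the "low-level details of a double-entry accounting system" referred to above, a bare-bones posting core in C# looks roughly like the sketch below. The class and member names are invented for the example; a real engine layers periods, posting rules, sub-ledgers, tax and reporting on top of this one invariant, namely that every journal entry's debits equal its credits.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Minimal double-entry core: a journal entry is a set of lines whose
        // debits and credits must balance before it may be posted to the ledger.
        public class JournalLine
        {
            public string Account { get; set; }   // e.g. "Accounts Receivable", "Sales"
            public decimal Debit { get; set; }
            public decimal Credit { get; set; }
        }

        public class JournalEntry
        {
            public DateTime Date { get; set; }
            public string Description { get; set; }
            public List<JournalLine> Lines { get; } = new List<JournalLine>();

            public bool IsBalanced =>
                Lines.Sum(l => l.Debit) == Lines.Sum(l => l.Credit);
        }

        public class GeneralLedger
        {
            private readonly List<JournalEntry> _entries = new List<JournalEntry>();

            public void Post(JournalEntry entry)
            {
                if (!entry.IsBalanced)
                    throw new InvalidOperationException("Journal entry does not balance.");
                _entries.Add(entry);
            }

            // Debit-positive balance of a single account across all posted entries.
            public decimal BalanceOf(string account) =>
                _entries.SelectMany(e => e.Lines)
                        .Where(l => l.Account == account)
                        .Sum(l => l.Debit - l.Credit);
        }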

    Read the article

  • Is SQLDataReader slower than using the command line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command-line utility with a SqlDataReader. The old code uses:
      System.Diagnostics.ProcessStartInfo procStartInfo =
          new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd);
    where sqlCmd is something like:
      "sqlcmd -S " + serverName + " -y 0 -h-1 -Q " + "\"" + "USE [" + database + "]" + ";" + txtQuery.Text + "\"";
    The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practice, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at best the same speed and quite possibly slower. I believe I'm doing everything correctly with SqlDataReader. The code is:
      using (SqlConnection connection = new SqlConnection())
      {
          try
          {
              SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connectionString);
              connection.ConnectionString = builder.ToString();
              SqlCommand command = new SqlCommand(queryString, connection);
              connection.Open();
              SqlDataReader reader = command.ExecuteReader();
              // do stuff w/ reader
              reader.Close();
          }
          catch (Exception ex)
          {
              outputMessage += (ex.Message);
          }
      }
    I've used System.Diagnostics.Stopwatch to time both approaches, and the command-line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again it's lightning fast, but for this application we don't anticipate that. I have already done some research on this problem. I note that the command-line utility sqlcmd uses OLE DB technology to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command-line approach involves starting up a process; I really thought it would be slower. Any thoughts? Thanks, Dave
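
    For timing comparisons like this, a rough harness along the following lines (a sketch; the connection string and query are placeholders) averages several SqlDataReader runs and touches every column, so that connection pooling and the cost of actually pulling the data across are part of the measurement rather than a single cold run:

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        class ReaderTiming
        {
            static void Main()
            {
                const string connectionString = "..."; // placeholder
                const string queryString = "...";      // placeholder
                const int runs = 10;

                var watch = Stopwatch.StartNew();
                for (int i = 0; i < runs; i++)
                {
                    using (var connection = new SqlConnection(connectionString))
                    using (var command = new SqlCommand(queryString, connection))
                    {
                        connection.Open();
                        using (SqlDataReader reader = command.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                // Read every column so data transfer is included in the timing.
                                for (int col = 0; col < reader.FieldCount; col++)
                                    reader.GetValue(col);
                            }
                        }
                    }
                }
                watch.Stop();
                Console.WriteLine("SqlDataReader average: {0} ms",
                                  watch.ElapsedMilliseconds / runs);
            }
        }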

    Read the article

  • How do people know so much about programming?

    - by Luciano
    I see people on these forums with a lot of points, so I assume they know about a lot of different programming stuff. When I was young I knew BASIC (Commodore) and Turbo Pascal (PC). Then in college I learnt about C, memory management, the x86 instruction set, loop invariants, graphs, DB query optimization, OOP, functional programming, lambda calculus, Prolog, concurrency, polymorphism, Newton's method, simplex, backtracking, dynamic programming, heuristics, NP-completeness, LR, LALR, neural networks, static and dynamic typing, Turing, Gödel, and more in between.
    Then in industry I started with Java several years ago and learnt about it and its variety of frameworks, and also design patterns, architecture patterns, web development, server development, mobile development, TDD, BDD, UML, use cases, bug trackers, process management, people management if you are a tech lead, profiling, security concerns, etc. I started to forget what I learnt in college... And then there is the stuff I don't know yet, like Python, .NET, Perl, and JVM stuff like Groovy or Scala.
    Of course Google is a must for rapid documentation access, for knowing whether a problem has already been solved and how, and for keeping informed about new stuff via blogs and places like this one. It's just too much, or I just have a bad memory... how do you manage it?

    Read the article

  • Pros and cons of making database IDs consistent and "readable"

    - by gmale
    Question: Is it a good rule of thumb for database IDs to be "meaningless"? Conversely, are there significant benefits to having IDs structured in a way where they can be recognized at a glance? What are the pros and cons?
    Background: I just had a debate with my coworkers about the consistency of the IDs in our database. We have a data-driven application that leverages Spring so that we rarely ever have to change code. That means, if there's a problem, a data change is usually the solution. My argument was that by making IDs consistent and readable, we save ourselves significant time and headaches in the long term. Once the IDs are set, they don't have to change often, and if done right, future changes won't be difficult. My coworkers' position was that IDs should never matter: encoding information into the ID violates DB design policies, and keeping them orderly requires extra work that "we don't have time for". I can't find anything online to support either position, so I'm turning to all the gurus here at SA!
    Example: Imagine this simplified list of database records representing food in a grocery store; the first set represents data that has meaning encoded in the IDs, while the second does not.
    IDs with meaning:
      Type
        1 Fruit
        2 Veggie
      Product
        101 Apple
        102 Banana
        103 Orange
        201 Lettuce
        202 Onion
        203 Carrot
      Location
        41 Aisle four top shelf
        42 Aisle four bottom shelf
        51 Aisle five top shelf
        52 Aisle five bottom shelf
      ProductLocation
        10141 Apple on aisle four top shelf
        10241 Banana on aisle four top shelf
      (Just by reading the IDs, it's easy to recognize that these are both Fruit on aisle 4.)
    IDs without meaning:
      Type
        1 Fruit
        2 Veggie
      Product
        1 Apple
        2 Banana
        3 Orange
        4 Lettuce
        5 Onion
        6 Carrot
      Location
        1 Aisle four top shelf
        2 Aisle four bottom shelf
        3 Aisle five top shelf
        4 Aisle five bottom shelf
      ProductLocation
        1 Apple on aisle four top shelf
        2 Banana on aisle four top shelf
      (Given these IDs, it's harder to see that these are both fruit on aisle 4.)
    Summary: What are the pros and cons of keeping IDs readable and consistent? Which approach do you generally prefer, and why? Is there an accepted industry best practice?

    Read the article
