Search Results

Search found 653 results on 27 pages for 'overkill'.

Page 19 of 27

  • Simple Linux program that takes any HTTP/HTTPS request and returns a single page?

    - by ultrasawblade
    I have a Linux box operating as a router. There's a NIC that's connected to the internet (WAN), a NIC connected to an 8-port GbE switch (LAN), and a NIC connected to a Linksys wireless N router (WLAN). Routing between everything is working perfectly. I have security completely disabled on the wireless router, but the WLAN NIC is firewalled such that it will only accept DNS queries and PPTP VPN connections. Currently HTTP/HTTPS traffic and everything else is blocked.

    I would like to run something that listens on ports 80/443 of the WLAN NIC and, for connections not on the VPN, answers any HTTP/HTTPS request with a single webpage saying "Unauthenticated" and explaining how to sign into the VPN. A transparent proxy seems to be what I need, but my searches all seem to direct me to Squid, which is already running on my server and seems overkill for this simple task. Is there a simpler, lightweight program out there that does just this, or should I just suck it up and run two instances of Squid (or figure out how to configure it)? Or is this entire VPN thing I'm doing complete nonsense and I should just enable encryption on the wireless router?
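
    A minimal sketch of the "captive page" idea, not a recommendation of any particular daemon - it assumes the WLAN-facing interface is wlan0 and that busybox is installed, so adjust names to your setup:

      # serve one static "Unauthenticated" page on a local port
      mkdir -p /var/www/portal
      echo '<h1>Unauthenticated</h1><p>Sign in to the PPTP VPN to reach the web.</p>' > /var/www/portal/index.html
      busybox httpd -p 8080 -h /var/www/portal
      # send web traffic arriving on the WLAN NIC to that listener
      iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 -j REDIRECT --to-ports 8080

    Port 443 can be redirected the same way, but browsers will show a certificate warning for intercepted HTTPS, so many captive setups only answer port 80.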

    Read the article

  • Are These Parts compatible?

    - by ell
    I have never assembled a PC before, although I have taken an old one apart and replaced a few parts in others here and there, so I have (very) limited experience. I have been looking to build a PC, and here are the parts I might buy:

    - Foxconn P45AL Intel P45 (Socket 775) DDR2 motherboard (with onboard sound, I believe)
    - Gigabyte GeForce GTX 460 OC 768MB GDDR5 PCI-Express graphics card
    - 2 x 1GB sticks of dual-channel DDR2 memory (already have these)
    - Intel Core 2 Quad Q8400 LGA775 'Yorkfield' 2.66GHz 4MB-cache processor
    - Samsung SpinPoint F3 1TB SATA-II 32MB-cache hard drive
    - Antec Dark Fleet Series DF10 gaming enclosure - black
    - (I already have a monitor, mouse, keyboard and DVD/CD drive)
    - Akasa Freedom Power 1000W modular power supply

    I have never done this before, so feel free to laugh at me for getting something obvious wrong, forgetting a vital component, etc., but is all of this compatible? And have I gone overkill on the PSU? If so, please recommend one. Thanks in advance, ell.

    EDIT: Added the PSU, which I forgot to mention.
    EDIT: I would be using this to surf the internet, write e-mails, chat, word process, play games such as Team Fortress 2 and Spring RTS (at the highest graphics settings, hopefully), do some 3D modelling in Blender, some OpenGL programming, and image editing in GIMP.

    Read the article

  • System State Backup Retention Policies

    - by isoscelestriangle
    I was wondering if there was a general consensus on how long to keep system state backups. I am trying to reevaluate our current backup process, and trying to get a good handle on our current storage requirements. Our current setup involves tapes and sending backups offsite with Barracuda Networks. We have been doing our system state backups with Barracuda now, which does full backups daily, leaving our storage requirements growing quite quickly. My boss is a little too gung-ho with backups and wants our system states saved for quite a while. We currently have 5 days of nightlies, 5 weeklies, 3 monthlies, and so on. I think this is quite overkill for system state backups. My boss wants the ability to go back in time to find when an issue appeared, but I don't think that is practical. Many things change in the course of several months. I also think it would be hard not to notice problems with our DCs and other servers for several months. I would think that a previous week's snapshot and the current week's dailies would suffice. Any advice or reading you can point me to? Thanks!

    Read the article

  • How important is dual-gigabit LAN for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS server with a single Gigabit NIC?

    Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (OK, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty.

    Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.
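
    If a dual-NIC board does turn up, bonding two ports on Ubuntu is only a few lines of configuration. A hedged sketch of /etc/network/interfaces (ifenslave package), assuming the ports are eth0/eth1 and using placeholder addresses:

      auto bond0
      iface bond0 inet static
          address 192.168.1.10
          netmask 255.255.255.0
          gateway 192.168.1.1
          bond-slaves eth0 eth1
          bond-mode balance-alb    # works without switch support; 802.3ad needs LACP on the switch
          bond-miimon 100

    Note that a single client still tops out at one link's speed with most bonding modes; the gain is in aggregate throughput across several simultaneous clients.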

    Read the article

  • Having trouble keeping a 1GB RAM CentOS server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running WordPress on a 1GB CentOS server configured per the internet (online research). No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and will spin more or less indefinitely. On the server side, the shell locks up completely. We have to do a hard reboot from the control panel and then everything is fine.

    Watching "top" I see it hovering between 35 - 55% memory usage generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30 - 40 Apache processes showing, which pushed memory over the edge. "error_log" tells me that MaxClients was reached right before each reboot instance. I've tried tinkering with that, but to no avail. I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month it seems like that's a bit overkill, since it was running fairly well on a shared server before.

    Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating!

    Edit: quick top snapshot:

      top - 15:18:15 up 2 days, 13:04, 1 user, load average: 0.56, 0.44, 0.38
      Tasks:  85 total,   2 running,  83 sleeping,   0 stopped,   0 zombie
      Cpu(s):  6.7%us,  3.5%sy,  0.0%ni, 89.6%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
      Mem:   2051088k total,   736708k used,  1314380k free,   199576k buffers
      Swap:  4194300k total,        0k used,  4194300k free,   287688k cached
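
    For what it's worth, the usual starting point on a box this size is capping Apache well below the point where its processes outrun RAM. A hedged httpd.conf sketch for the prefork MPM - the numbers are illustrative placeholders to be tuned against your actual per-process memory footprint, not recommended values:

      <IfModule prefork.c>
          StartServers          3
          MinSpareServers       3
          MaxSpareServers       6
          ServerLimit          12
          MaxClients           12
          MaxRequestsPerChild 500
      </IfModule>
      KeepAlive On
      KeepAliveTimeout 3

    Roughly, MaxClients times the resident size of one Apache+PHP process should stay comfortably under whatever memory is left after MySQL.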

    Read the article

  • How can I store logs and meet compliance requirements for free?

    - by Martin
    I am trying to keep long-term logs of an app in such a way that it could plausibly be demonstrated to third parties/a court that the application has processed certain data at a given time. The data can be represented in XML or text format. A simple gzipped log is not plausible, as I may have added or modified data afterwards, whereas an external logging service would be overkill. Cost is an issue; we are not dealing with financial data or anything like that, but rather some simple user-generated content, where some malicious users tried to blame the operator in the past when things escalated and went to court.

    My question: Is there some kind of signing software for Linux that signs each element of a log in such a way that it can easily be shown that no element was added or modified afterwards? Plug-ins for some free Splunk alternatives would be fine too. Ideally the software I am looking for should be under a GPL or similar license. I could probably achieve something like this by using PGP/GPG signing functions and including the previous element's signature within the following element, but I would prefer to use some program where you do not have to argue about the validity of your own code.

    Note to mods: I am not asking this question on Stack Overflow, because I am not looking to write my own code, for the reasons described above. I think this question fits Server Fault rather than Super User, as server-side logging software is discussed here rather than on Super User.
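
    For illustration only - not the packaged tool being asked for - the hash-chaining idea can be sketched in a few lines of shell, assuming gpg is already set up and using placeholder paths and key id:

      # append one entry that embeds a SHA-256 of everything logged so far,
      # then re-sign the whole file; an entry can't later be altered without
      # breaking every hash that follows it
      prev=$(sha256sum /var/log/app-chain.log 2>/dev/null | cut -d' ' -f1)
      printf '%s %s %s\n' "$(date -u +%FT%TZ)" "${prev:-GENESIS}" "$1" >> /var/log/app-chain.log
      gpg --batch --yes --armor --detach-sign --local-user ops@example.org \
          --output /var/log/app-chain.log.asc /var/log/app-chain.log

    On its own this only proves ordering under your key; pairing it with an external timestamp (even periodically publishing the latest hash somewhere public) is what makes backdating hard to argue.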

    Read the article

  • How should a small company administer their web server?

    - by John Isaacks
    We currently have our website hosted by a small company that is actually a reseller for Rackspace. They act as our server administrators: they configured the servers, they handle the backups, and if there is a problem, we call them and they fix it. We are growing and want to move away from our shared server to either a cloud or a dedicated server. I am leaning towards cloud myself, but I am open to either. The current company doesn't seem to want to offer us anything more than a shared hosting plan.

    I looked into cloud solutions at vps.net; with them I would have to be the server administrator myself. I am the website programmer, but administering the server is outside my comfort zone. vps.net does have a $99/month plan for Pro-Active Managed Support, but I am not sure if this is the equivalent of a server admin who is there when you need them. We could hire someone in house, but I think that would be overkill for our needs.

    I am not exactly sure what we need. I do know we need as close to 100% uptime as we possibly can get, and we need the ability to add/remove/change the server configuration/software/etc. when needed (though changes shouldn't be very frequent once everything is set up right). Can someone point me in the right direction? What do other companies do?

    Read the article

  • How to create a static IP on Windows Server 2008 R2 so I can access the server remotely

    - by Aesir
    I have just purchased an HP ProLiant N40L which I am intending to use as a NAS, a learning tool, and just in general something to mess around with. As a student, via the Microsoft DreamSpark program I can get a free copy of Windows Server 2008 R2, which I am using as the OS. So that I can remote to the box from outside of my local network, and so that I can stream media from it to my PS3, I have read that I need to create a static IP for the server and use port forwarding to forward to this IP so I can remote in. Is this correct? I am not really sure how to do this, and whether I need to make these changes in my router configuration, in the OS, or both. I am a novice when it comes to networking; however, most resources for Windows Server 2008 R2 seem to assume a fair amount of experience already. I realise that using this particular OS may seem like overkill for what I currently wish to do with it (stream content to other devices and back up), but as I can get a copy for free it seems sensible.

    Edit: From reading the answers posted, I feel I should give more information. I have now tried to add a static IP address using my router's configuration settings. I have used the getmac command to get the MAC address of the server. My ISP is Virgin Media, and I have gone to the LAN IP section and added an IP address to the DHCP Reservation Lease Info. I can now use Remote Desktop Connection internally to remote to the server (so I am assuming assigning this IP has worked). How do I configure this on the OS as well? I am also unsure how I would remote to this machine from outside of my local network.
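
    A hedged sketch of setting the same address statically on the OS side, from an elevated command prompt - the adapter name and the 192.168.0.x addresses are placeholders, so match them to the reservation created on the router:

      netsh interface ipv4 set address name="Local Area Connection" static 192.168.0.50 255.255.255.0 192.168.0.1
      netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.0.1 primary

    (A DHCP reservation alone is often enough, since the server then always receives that address anyway.) For remoting in from outside the LAN, the usual approach is to forward TCP port 3389 on the router to this reserved address and connect to your public IP.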

    Read the article

  • Is there any way to tweak / rotate mouse orientation? Any applications? Registry edits?

    - by calbar
    I've got a very frustrating issue with my new Logitech Marathon Mouse M705. It is absolutely perfect for what I need, with the exception that it tracks at an angle for some reason. What I mean is, when you slide the cursor to the left it trends upward, and when you slide to the right it trends down. Moving the cursor along a flat horizontal line is no longer a natural motion - you need to fight what I suspect is a mechanical error of some kind. Unfortunately, I've already exchanged this mouse once and tested both on different Windows 7 and Mac OS X machines - the problem continues to occur. Is there a software solution for me? I'm incredibly surprised there is no simple way to adjust the orientation... how can every mouse manufacturer possibly adjust their hardware to track to everyone's tastes? What about those who need to flip orientation a full 90 or 180 degrees? I only need to adjust mine a few degrees, but I'm sure that need has arisen as well. Anyway, I'm running the latest SetPoint drivers (6.00) on Windows 7 and there are no orientation options available. I've checked out uberOptions, and the M705 isn't supported yet (with the last version update over 6 months ago). MAF Mouse requires a very strange series of mouse clicks to activate. That app also seems a little overkill and costs $$ (which I'm willing to pay as a last resort). Is there no universal registry value for mouse orientation? How about for the SetPoint drivers specifically? I've done a simple search in regedit without any luck. An XML file somewhere? Anything?

    Read the article

  • The best way to hide data: encryption, connection, hardware

    - by Tico Raaphorst
    Say I have a VPS, which I own now, and I want to make the most secure and stable system that I can. How would I do that? Just to try, I installed Debian 7 with LVM encryption via the installer: you get two partitions, a /boot and an encrypted partition. When booting you will be prompted to fill in the password to unlock the encrypted partition, which then holds more partitions like /home, /usr and swap space, which mount automatically. Now, I do need to fill in the password over a VNC-SSL connection via the control panel website of the VPS hoster, so they could see my disk encryption password if they wanted to - they have the option to look at what I have as data, right? Data encryption on a VPS - is it possible to have a 100% secure virtual private server?

    So let's say I have my server and it is sitting well locked next to me, with the following covered, all on the same system:

    - BIOS (you have to replace the BIOS)
    - RAID (you have to unlock the RAID config)
    - disk (you have to unlock the disk encryption)
    - file-like zip/tar (files are stored in encrypted archives), which are in some other encrypted file mounted as a partition (archives mounted as partitions)

    So it will be slow, but it would be extremely difficult to crack the encryption, so to speak, if you stole the server. Then I only need to make connections like SSH safer with single-use passwords, and block all incoming and outgoing connections but give one "exception" for myself - and maybe one more for if I somehow lose my identity for the "exception". What other overkill but realistic security options are available? I have heard about SELinux.
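
    A hedged sketch of the inbound half of that "one exception" firewall - 203.0.113.7 stands in for your own address, and it assumes you really do always connect from there (everything else, including yourself from anywhere new, gets locked out):

      iptables -P INPUT DROP
      iptables -P FORWARD DROP
      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -p tcp -s 203.0.113.7 --dport 22 -j ACCEPT

    Single-use passwords for SSH are usually handled with one-time-password PAM modules or, more commonly, by disabling passwords entirely in favour of keys.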

    Read the article

  • How do you keep server documentation from getting out of sync with the actual setup?

    - by Frerich Raabe
    I'm a hobbyist maintaining a small FreeBSD server serving mail via IMAP - it's an exercise in server administration. The setup does have reasonably good documentation (in AsciiDoc format), which recently allowed another person to recreate the entire setup from scratch in less than 30 minutes. However, I noticed that after the initial setup, it easily happens that small changes made to the system (say: inetd gets disabled, my IMAP server listens on an additional port for ManageSieve connections, a new router is added to the Exim configuration) don't end up in the documentation immediately (if at all). My idea was to avoid this problem by (partially?) generating the documentation out of the configuration files and the comments therein - one way to implement this may be to put /etc and /usr/local/etc into some source code management system (say, git) and then run a script which regenerates the documentation on every commit. However, I'm not sure whether that would be overkill and/or too difficult to get right (after all, I don't want complete copies of the source files in my documentation, but rather just the diffs). How do other people keep server documentation from getting outdated - is there a good way to keep it in sync automatically, or do you just have the discipline to update the documentation at the same time you modify the system?
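
    The commit-hook half of that idea is small enough to sketch. This assumes /usr/local/etc is already a git repository and that the AsciiDoc sources live in /root/docs - both paths are placeholders:

      #!/bin/sh
      # file: /usr/local/etc/.git/hooks/post-commit  (make it executable)
      # record what just changed in the config, then rebuild the HTML docs
      git log -1 -p > /root/docs/last-config-change.diff
      asciidoc -o /root/docs/setup.html /root/docs/setup.txt

    That captures the diffs automatically; weaving them into the hand-written prose (rather than just appending them) is the part that still takes discipline.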

    Read the article

  • Simple Distributed Disconnected way to sync a directory

    - by Rory
    I want to start regularly backing up the home directory on my Ubuntu laptop, machine X. Suppose I have access to 2 different remote (Linux) servers that I can back up to, machines A & B. Machine X will be the master, and should be synced to A and B. I could just regularly run rsync from X to A and then from X to B. That's all I need. However, I'm curious if there's a more bandwidth-efficient, and hence faster, way to do it. Assuming X is going to be on residential-style broadband lines, and since I don't want to soak up the bandwidth, I would limit the transfer from X. A and B will be on all the time; however, X will not be, so I'd also like to reduce the amount of time that X is transferring, potentially allowing A and B to spend more time transferring. Also, X won't be connected all the time.

    What's the best way to do this? rsync from X to A, then from A to B? Timing that right could be troublesome. I don't want to keep old files around, so if I were to rsync, then the --del option would be used. Could that mean something might get transferred from A to B, then deleted from B, then transferred from A to B again? That's suboptimal. I know there are fancy distributed filesystems like Gluster, but I think that's overkill in this case, and it might not fit with the disconnected nature.
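
    A hedged sketch of the relay approach being weighed up here, with placeholder host names, paths and bandwidth cap:

      # on X: push to A only, throttled so it doesn't soak the uplink (KB/s)
      rsync -az --delete --bwlimit=100 ~/ backup@hostA:/srv/backup/x-home/
      # on A (e.g. from cron): relay the received tree on to B over the faster link
      rsync -az --delete /srv/backup/x-home/ backup@hostB:/srv/backup/x-home/

    Because A relays a complete copy rather than a moving target, the delete-then-retransfer worry mostly comes down to scheduling: if A only syncs to B while it is not mid-transfer from X, B simply converges on whatever A last received.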

    Read the article

  • Lost Windows 7 boot after EasyBCD with EFI

    - by drent
    I've got a Lenovo Y580 with a 64GB SSD and a 1TB HDD, set up using GPT and booting via (U)EFI. I was trying to get my Linux Mint installation onto the Windows boot manager using EasyBCD (I didn't realise it doesn't play well with EFI), but it wiped my boot partition/loader and I cannot seem to get Windows back (and I still can't get a bootable Linux Mint). Using the System Recovery utility, Startup Repair can't "see" Windows (it might be because I'm using a 7 Pro disk to recover Home Premium?). In the command prompt, the bootrec tools don't do anything, and bootsect can't run because it says that it's for BIOS only and I've booted with EFI. I can see the EFI data on the 200MB SSD partition using diskpart, but I don't know how to add Windows back onto whatever bootloader I have/need. At the moment the only options I can see are:

    1. Do a fresh install of Windows and hope that the setup remains as fast as the default one (the SSD is some kind of cache for Windows, but I can't quite see how it works given that the rest of the SSD is unpartitioned space). This seems like overkill given that Windows was working fine until EasyBCD deleted it.
    2. Try forcing BIOS mode and see if that somehow magically fixes things.
    3. Try converting from GPT to MBR to try and use the bootrec/bootsect tools (and maybe back again), which seems like a really bad idea.

    Anyone have any ideas?
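
    One avenue that may be worth checking before a reinstall (hedged - the partition numbers and drive letter are examples, and this assumes the recovery disc itself was booted in UEFI mode): assign a letter to the ~200MB EFI System Partition in diskpart (select disk 0, select partition 1, assign letter=S, exit), then rebuild the boot files from the existing Windows installation:

      bcdboot C:\Windows /s S: /l en-US

    If bcdboot completes, the firmware boot menu should list the Windows Boot Manager again.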

    Read the article

  • Access an external SSH server through a restrictive proxy [on hold]

    - by Cyrille
    I'm a software developer. I wish to access my computer at home through SSH. For example, I sometimes need to access my personal projects' source code to check how I handled specific problems. Unfortunately, I currently work behind an over-restrictive and anti-productive proxy that wastes a hell of a lot of everyone's time (we often have to visit websites from our smartphones or use a web proxy to check very legitimate websites for answers, and don't get me started on the other "security" overkill features we have to cope with...). Well, back to the subject: I can access my home computer from my phone (SSH, ports 22 and 80 both redirected by the router to port 22). It works, but it's quite uncomfortable. From my office computer, this is what I tried so far:

      export http_proxy=http://user:pass@proxyip:8080
      echo "user:pass" > ~/.corkscrew-auth
      echo "ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth" > ~/.ssh/config
      ssh 82.23.34.56 -l me -p 80

    which fails with (the same happens without -p 80):

      Proxy could not open connnection to 82.23.34.56: Forbidden
      ssh_exchange_identification: Connection closed by remote host

    Without corkscrew:

      ssh: connect to host 82.23.34.56 port 80: Connection timed out
      ssh: connect to host 82.23.34.56 port 22: Connection timed out

    Any other idea?

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi, we have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database - one-way replication. The solution has worked well in test.

    I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting db or any of the system replication (e.g. distribution) databases fail, then it's no big deal - we can clear it all down and then re-create the replication. I realise that taking a complete snapshot of the OLTP database would be time-consuming (approx 5 hours), but I'd be more relaxed about that than trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than I have, e.g. if there were multiple subscribers with 2-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.

    Read the article

  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10GB files from one drive to another, copying similar files over the network, merging Hyper-V snapshots, compressing large files), performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfer if this would give me more responsiveness.

    System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8GB RAM, 2 SATA drives - 320GB (ST3320418AS) and 1TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken.

    Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and I could forget about it - is there any way to manage IO priorities on Windows?

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:

    - Create a user that is allowed to ssh
    - Create a new MySQL database and user
    - Create a virtual host for the application
    - Log in with the new user, do a git checkout of the application (in the right location)
    - Create tables in the new database, and add some init data
    - Add some cron jobs
    - Create a first user that can log in
    - Add this new instance to Capistrano

    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a sales person (so something web-based). I could write a (bash) script that does most of these tasks, and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means you need a list of all existing customers/instances.
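
    The script-plus-thin-wrapper route really is just the list above run in order. A hedged sketch covering most of it - every name here (paths, the Apache template, the repo URL, the setup helper) is a placeholder for illustration, not a real project layout:

      #!/bin/sh
      set -e
      CUSTOMER="$1"
      DBPASS="$(openssl rand -hex 12)"
      useradd -m -s /bin/bash "$CUSTOMER"
      mysql -e "CREATE DATABASE ${CUSTOMER}; CREATE USER '${CUSTOMER}'@'localhost' IDENTIFIED BY '${DBPASS}'; GRANT ALL ON ${CUSTOMER}.* TO '${CUSTOMER}'@'localhost';"
      sed "s/__NAME__/${CUSTOMER}/g" /etc/apache2/sites-available/template.conf > "/etc/apache2/sites-available/${CUSTOMER}.conf"
      a2ensite "${CUSTOMER}.conf" && service apache2 reload
      su - "$CUSTOMER" -c "git clone git@example.com:saas-app.git app && app/bin/setup-db '${DBPASS}'"
      echo "0 3 * * * /home/${CUSTOMER}/app/bin/daily-cron" | crontab -u "$CUSTOMER" -

    A one-page form that shells out to something like this (plus a matching teardown script) covers the sales-person use case; configuration-management tools such as Puppet or Fabric can do the same job, but for eight fixed steps they may be more machinery than needed.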

    Read the article

  • kanban scrumish tool(s) to get started

    - by Davide
    After investigating scrum and kanban a little, I finally read this answer and decided to start using kanban, picking some things from scrum (note that I'm working mostly by myself, and I have read this question and its answers). Now, my question is: which tool would be best to get started?

    - whiteboard and post-its
    - agilezen.com
    - JIRA with GreenHopper
    - a spreadsheet (possibly on Google Docs)
    - brightgreenprojects.com
    - Agilo
    - Target Process
    - something else (please specify)

    Notes about each:

    - I would lean towards the whiteboard, but there are several drawbacks (e.g. it cannot make automatic charts, time measurements, metrics, and sometimes I work from home - where I need it most - and it's not convenient to carry :-)
    - I don't want to remember another username/password (I promised myself to sign up only to OpenID-enabled services)
    - My employer has JIRA but my group doesn't use it - I might ask for an account (it shouldn't require another password) and maybe later involve the rest of the group. But I don't know if they are using GreenHopper and whether installing it is a big deal.
    - I generally hate spreadsheets
    - maybe overkill?
    - I'd be happy to have a localhost instance, but it could be problematic to give access to the whole group (per network/firewalls) - not a deal-breaker but surely a concern

    What I'd like to get from this:

    - being more productive
    - tracking how much time I spend on any given task, possibly discussing the issue with my supervisor
    - tracking what "blocks" me most often
    - immediately seeing where I am compared to my schedule
    - managing my long todo list in a better way (e.g. answering the "what should I do next?" question faster)

    Do you have any suggestions?

    Note on the scrumish tag: read Henrik Kniberg's PDF. He first introduced the definition of scrumish on page 9.

    Read the article

  • Ruby - Feedzirra and updates

    - by mplacona
    Hi, trying to get my head around Feedzirra here. I have it all setup and everything, and can even get results and updates, but something odd is going on. I came up with the following code: def initialize(feed_url) @feed_url = feed_url @rssObject = Feedzirra::Feed.fetch_and_parse(@feed_url) end def update_from_feed_continuously() @rssObject = Feedzirra::Feed.update(@rssObject) if @rssObject.updated? puts @rssObject.new_entries.count else puts "nil" end end Right, what I'm doing above, is starting with the big feed, and then only getting updates. I'm sure I must be doing something stupid, as even though I'm able to get the updates, and store them on the same instance variable, after the first time, I'm never able to get those again. Obviously this happens because I'm overwriting my instance variable with only updates, and lose the full feed object. I then thought about changing my code to this: def update_from_feed_continuously() feed = Feedzirra::Feed.update(@rssObject) if feed.updated? puts feed.new_entries.count else puts "nil" end end Well, I'm not overwriting anything and that should be the way to go right? WRONG, this means I'm doomed to always try to get updates to the same static feed object, as although I get the updates on a variable, I'm never actually updating my "static feed object", and newly added items will be appended to my "feed.new_entries" as they in theory are new. I'm sure I;m missing a step here, but I'd really appreciate if someone could shed me a light on it. I've been going through this code for hours, and can't get to grips with it. Obviously it should work fine, if I did something like: if feed.updated? puts feed.new_entries.count @rssObject = initialize(@feed_url) else Because that would reinitialize my instance variable with a brand new feed object, and the updates would come again. But that also means that any new update added on that exact moment would be lost, as well as massive overkill, as I'd have to load the thing again. Thanks in advance!

    Read the article

  • Is LaTeX worth learning today?

    - by Ender
    I know that LaTeX is big in the world of academia, and it was probably a big name in desktop publishing before the glory days of WordPerfect and Microsoft Office, but as a Windows user who is interested in the power of LaTeX and the general smoothness of a LaTeX-generated page, is it really worth learning? In a couple of months I'll be starting my final year in Computer Science, and LaTeX has been bounced around the campus by many of the Linux geeks. In reality, is there any need to use it today? What will I actually gain from it, and will I enjoy using it? Finally, how does one use LaTeX on a Windows machine? What software do I really need? I've read a couple of guides, but many of them seem like overkill. Please help break a LaTeX newbie into the world of professional academic publishing!

    EDIT: I've toyed with LaTeX for a while, and have even learned that it's pronounced "lay-tech", not "lay-tecks". I'll agree once again with the accepted answer in saying that MiKTeX is the best solution for Windows users.
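
    For a sense of scale, a complete LaTeX document is only a few lines; with a distribution such as MiKTeX installed, it compiles to PDF with "pdflatex hello.tex" from an ordinary command prompt (the file name is arbitrary):

      % hello.tex - a minimal, self-contained document
      \documentclass{article}
      \begin{document}
      Hello, \LaTeX{} world. An inline formula: $e^{i\pi} + 1 = 0$.
      \end{document}

    Editors like TeXnicCenter or Texmaker wrap that compile step in a GUI, which is usually all the extra software a Windows setup needs.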

    Read the article

  • [OffTopic] PhD is it worth it?

    - by Zenzen
    So I'll be graduating next year (hopefully), and lately I've started thinking about getting a PhD in computer science (to be more precise, I was thinking about something related to "distributed systems"; I don't have any specific topics in mind yet), but is it really worth it? A PhD course here lasts 4 years and is free IF you're doing the "normal one" (which means you have class from Monday to Friday), OR you have to pay if you want to study on the weekends. So I've been thinking about either getting a normal full-time job (as a JEE programmer) and doing my PhD on the weekends, OR doing the normal PhD course (Mon-Fri) and getting a part-time job as a software dev (the main reason I want a job is simple - I need to eat). That would give me practical working experience and theoretical knowledge plus a PhD, but it would also mean working 7 days a week from 9 to 5 for 4 years straight (sounds like overkill). Is it, job-wise, really worth getting a PhD? As an employer, do you prefer people with PhDs or MScs? This might be kind of important - I'm not from the US or Western Europe; how are PhDs from other countries seen (I am graduating from one of the best, if not the best, schools in my country, but it's still a Central European country...)? Are they useless? I won't hide it - after graduation I am thinking about moving abroad.

    Read the article

  • Automatic linebreak in WPF label

    - by Vlad
    Dear all, is it possible for a WPF Label to split itself automatically into several lines? In the following example, the text is cropped at the right.

      <Window x:Class="..."
              xmlns="..."
              xmlns:x="..."
              Height="300" Width="300">
          <Grid>
              <Label>_Twas brillig, and the slithy toves did gyre and gimble in the wabe: all mimsy were the borogoves, and the mome raths outgrabe.</Label>
          </Grid>
      </Window>

    Am I doing something wrong? Using other controls is unfortunately not a good option, since I need support for access keys. Replacing the Label with a TextBlock (with TextWrapping="Wrap") and adjusting its control template to recognize access keys would perhaps be a solution, but isn't it overkill?

    Edit: having a non-standard style for the Label will break skinning, so I would like to avoid that if possible.

    Read the article

  • Extract known pattern substring from NSString (without regex)

    - by d11wtq
    I'm really tempted to drop RegexKit (or my own libpcre wrapper) into my project in order to do this, but before I do that I want to know how Cocoa developers manage to do half of this basic stuff without really convoluted code or without linking with RegexKit or another regular expression library. I find it gobsmacking that Cocoa does not include any regular expression matching features. I've become so accustomed to using regular expressions for all kinds of things that I'm lost without them. I can do what I need without them, but the code would be rather convoluted. So, Cocoa devs, I ask you, what's the "Cocoa way" to do this... The problem is an everyday problem in programming as far as I'm concerned. Cocoa must have ways of doing this with the built-in features. Note that the position of the elements I want to match changes, and sometimes "quotes" are present. Whitespace is variable. Take the following strings:

      Content-Type: application/xml; charset=utf-8
      Content-Type: text/html; charset="iso-8859-1"
      Content-Type: text/plain; charset=us-ascii
      Content-Type: text/plain; name="example.txt"; charset=utf-8

    From all of these strings, how would you go about determining the MIME type (e.g. text/plain) and the charset (e.g. utf-8) using just the built-in Cocoa classes? I'd end up performing a series of -rangeOfString: and substring calls, with conditional checks to deal with the optional quotes etc. Is there a way to do this with NSScanner? The NSScanner class seems to have a pretty naive API to me. Something like C's sscanf() that works on NSString objects would be an ideal fit. Most of my string parsing needs are as simple as this example, so maybe regular expressions, while I'm accustomed to them, are overkill?

    Read the article

  • CQRS - The query side

    - by mattcodes
    A lot of the blogosphere articles related to CQRS (command query responsibility separation) seem to imply that all screens/viewmodels are flat, e.g. Name, Age, Location Of Birth, etc., and thus the suggestion that, implementation-wise, we stick them into a fast read source - single table per view, MySQL, etc. - and pull them out with something like a primitive SqlDataReader and kick that nasty NHibernate ORM, etc.

    However, whilst I agree that domain models don't map well to most screens, many of the screens that I work with are more dimensional, and I'm sure this is pretty common in LOB apps. So my question is: how are people handling screens where, for example, one displays a summary of customer details and then a list of their orders with a [more detail] link, etc.? I thought about keeping the straightforward SQL query to the query database but breaking off the outer join so I can build a suitable ViewModel for the View, but that seems like overkill? Alternatively (this is starting to feel yuck): in the CustomerSummaryView table, have a text/big (whatever the type is in your DB) column called Orders, with the columns for the order summary screen grid separated by "," and the rows by "|". Even with an XML datatype it still feels dirty. Any thoughts on an optimal practice?

    Read the article

  • Run Sinatra via a Rake Task to Generate Static Files?

    - by viatropos
    I'm not sure I can put this correctly, but I'll give it a shot. I want to use Sinatra to generate static HTML files once I am ready to deploy an application, so the resulting final website would be pure static HTML. During development, however, I want everything to be dynamic so I can use Haml and straight Ruby code to make things fast/DRY/clear. I don't want to use Jekyll or some of the other static site generators out there because they don't have as much power as Sinatra. So all I basically need to be able to do is run a rake task such as: rake sinatra:generate_static_files. That would run the render commands for Haml and everything else, and the result would be written to files. My question is, how do I do that with Sinatra in a Rake task? Can I do it in a Rake task? The problem is, I don't know how to include the Sinatra::Application in the rake task... The only other way I could think of doing it is using net/http to access a URL that does all of that, but that seems like overkill. Any ideas how to solve this?

    Read the article
