Search Results

Search found 7747 results on 310 pages for 'dev karl'.


  • How to be successful at BDD Specifications Workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by holding a specification workshop. For this workshop we had 2 developers, 1 tester and 1 business analyst. The workshop lasted 1h30, and by the end of it we had managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios we might miss, and the difficult ones.

    At the end of the workshop some people were actually unhappy with it. One developer felt he had wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident in our scenario coverage (she had a feeling we could have missed other important cases), but more importantly felt that the workshop was a waste of time too, as she could have figured out all these scenarios by herself, in less time.

    So my question is how this kind of workshop can actually work. In theory, given a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end-goal/done vision.

    But in practice, we still think it is more cost-effective to have the BA first work on the examples on her own, and only then have the scenarios reviewed/reworked by the three 'amigos'. With the BA working on her own, we actually feel more confident that we are less likely to miss things, and we still get to review the scenarios afterwards to double-check. We don't think that simple brainstorming/deliberate discovery is enough to seriously cover all the requirements for a feature; the business analyst is actually the best person for that kind of work. What we do instead is review what she wrote and check that we have a common understanding (which may lead to rewriting some of her scenarios or adding new ones she missed).

    This workshop lasted 1h30, and by the end of it we didn't feel confident enough about what we had done... sure, we could have spent more time on it, but honestly most people are exhausted after 1h30 of brainstorming. So how can you get this to work effectively in practice?

    Read the article

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with business logic? This needs refactoring... so does this, refactor, refactorrr."

    This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this, or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it seems to be exactly the attitude many game devs use when writing production game code:

        Mistake #4: Building a system, not a game
        ...if you ever find yourself working on something that isn't directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and able to handle every situation. We find that an itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it's not about the code, it's about the game you ship in the end. Don't write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing. ... Just get stuff on the screen as quickly as you can. And don't ever, ever, use the argument "if we take some extra time and do this the right way, we can reuse it in the game". EVER.

    Is it because games are (mostly) visually oriented, so it makes sense that the code will be weighted heavily in the view, and thus any benefit from moving stuff out to models/controllers is fairly minimal, so why bother?

    I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation, and there'd be more important performance issues to tackle before you worry about MVC overheads (eg render pipeline, AI algorithms, data structure traversal, etc).

    Same thing regarding TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (eg operating within physics simulations).

    Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (ie game devs come from a different background and thus have different coding habits)?

    Read the article

  • How can we unify business goals and technical goals?

    - by BAM
    Some background: I work at a small startup: 4 devs, 1 designer, and 2 non-technical co-founders, one who provides funding, and the other who handles day-to-day management and sales. Our company produces mobile apps for target industries, and we've gotten a lot of lucky breaks lately. The outlook is good, and we're confident we can make this thing work. One reason is our product development team. Everyone on the team is passionate, driven, and has a great sense of what makes an awesome product. As a result, we've built some beautiful applications that we're all proud of. The other reason is the co-founders. Both have a brilliant business sense (one actually founded a multi-million dollar company already), and they have close ties in many of the industries we're trying to penetrate. Consequently, they've brought in some great business and continue to keep jobs in the pipeline.

    The problem: The problem we can't seem to shake is how to bring these two awesome advantages together. On the business side, there is a huge pressure to deliver as fast as possible as much as possible, whereas on the development side there is pressure to take your time, come up with the right solution, and pay attention to all the details. Lately these two sides have been butting heads a lot. Developers are demanding quality while managers are demanding quantity. How can we handle this? Both sides are correct. We can't survive as a company if we build terrible applications, but we also can't survive if we don't sell enough. So how should we go about making compromises?

    Things we've done with little or no success:

    - Work more (well, it did result in better quality and faster delivery, but the dev team has never been more stressed out before)
    - Charge more (as a startup, we don't yet have the credibility to justify higher prices, so no one is willing to pay)
    - Extend deadlines (if we charge the same, but take longer, we'll end up losing money)

    Things we've done with some success:

    - Sacrifice pay to cut costs (everyone, from devs to management, is paid less than they could be making elsewhere. In return, however, we all have creative input and more flexibility and freedom, a typical startup trade-off)
    - Standardize project management (we recently started adhering to agile/scrum principles so we can base deadlines on actual velocity, not just arbitrary guesses)
    - Hire more people (we used to have 2 developers and no designers, which really limited our bandwidth. However, as a startup we can only afford to hire a few extra people.)

    Is there anything we're missing or doing wrong? How is this handled at successful companies? Thanks in advance for any feedback :)

    Read the article

  • Unable to boot into 11.10 in normal (get a black screen) OR recovery (get just a flashing cursor, no cli)

    - by user1092284
    Okay, so I had a dual boot of Tango Studio (based on Ubuntu 10.04) and Windows XP. Yesterday I downloaded the .iso for Ubuntu 11.10 and attempted to install from a USB stick (my BIOS won't normally boot from USB, but I had PLOP boot manager on a CD). I booted Ubuntu from the USB and from there formatted the partition with Tango on it and installed Ubuntu 11.10. On booting up I landed in GRUB rescue mode, so I booted from the USB again and used boot-repair to reinstall GRUB. After this I would see the normal GRUB menu, but on choosing Ubuntu I would come to a black screen. On choosing recovery mode it would begin starting normally with no obvious errors, but instead of coming to a CLI I would just get a blank screen with a flashing cursor at the top left, not accepting any input.

    I have since reformatted and reinstalled from a CD rather than USB and had the exact same problem. I used boot-repair again and the result is the same. Output of the most recent boot-repair run is at http://paste.ubuntu.com/869805/. I have also tried editing the Ubuntu GRUB entry and replacing quiet with text nomodeset, as I saw in an answer to another question. This got me a bit further - I saw the purple Ubuntu loading screen but still came to a blank screen after that. Anyway, in most of the other questions where that comes up, the user is still able to boot into recovery, while I am not. Can anyone help? Thanks in advance, let me know if there's any more info I need to provide!

    EDIT FOR MORE INFO: I read something saying it's quiet splash that should be replaced with nomodeset. Earlier I had left splash in the line. So I tried it that way, and it froze after displaying the following text:

        fsck from util-linux 2.19.1
        mountall: Plymouth command failed
        mountall: Disconnected from Plymouth
        /dev/sda5: clean, 139359/1741488 files, 745830/6961125 blocks

    From a bit of googling it doesn't look like Plymouth is essential, but I've checked and I do have the most current versions of mountall and plymouth installed, so I don't know why there's a problem.

    EDIT FOR MORE INFO AGAIN: I ran dpkg-reconfigure plymouth because I saw it mentioned in another forum, and it still says the Plymouth command failed on boot.
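
    If a parameter combination ever gets past the black screen, it can be made permanent through GRUB's defaults file instead of being re-typed at every boot. A sketch, assuming the stock /etc/default/grub layout, run from a successful boot or a live-CD chroot:

        # Sketch: persist kernel parameters instead of editing the boot entry each time.
        sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"/' /etc/default/grub
        sudo update-grub   # regenerates /boot/grub/grub.cfg with the new parameters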

    Read the article

  • Rolling With the Punches

    - by D'Arcy Lussier
    So I’ve been tweeting the last little while "Rolling with the punches" and I’ve had some people ask me what that meant. Whether you're running a conference (like I am this week), or a project, or a birthday party for a 2 year old, you need to be ready to handle those things that are unexpected. Risk mitigation can only go so far, and it's at those times that you need to become resourceful.

    So let me tell you what the last few days have been like. Today is the first day of Prairie Dev Con Winnipeg, a conference that I run. On Friday I was informed that my keynote speaker had lost his voice, one of my speakers had a family emergency and had to back out, and I got a warning from another that he was travelling over the weekend and if there was a storm or something he may not be able to get back by Monday for his talk. A storm didn't happen, but their car did break down and he was delayed. Finally, Saturday night I took my printing order to Staples. It was at 5 and they closed at 6, and I had a bunch of surveys to be printed and cut. The girl working said that she'd have it ready by the next day (Sunday); her intent was to come in the next morning and finish the job. Unfortunately, she had to be hospitalized that night and never made it into work… and never informed anyone of the remaining work. They found out at 3 pm when I came to pick it up, and there was no way they'd be able to cut everything in time.

    So how did we roll with these punches?

    - Miguel, my keynote speaker, was a trooper and was able to do the keynote, but asked that his session get moved from Monday to Tuesday. This is why I wait until the last day before printing out schedules; they can change right up to the event and even later.
    - I was able to move some sessions around to accommodate my stranded speaker and fill the empty slot from the speaker that couldn't make it.
    - Staples was able to get me half the cut surveys, so I took those and my wife will pick up the rest today. I altered how we'd collect session surveys, and actually I think it'll work better.

    So all of this is to say: plan, but also plan for what you can't plan for - there will be things that happen that blindside you, that you're not sure how to handle or solve. Stop, take a deep breath, and don't feel that you need to limit yourself to the boundaries that you initially set for yourself. Roll with the punch and learn from it so that you can avoid the blow next time. Now, back to the conference! D

    Read the article

  • How to chain actions/animations together and delay their execution?

    - by codinghands
    I'm trying to build a simple game with a number of screens - 'TitleScreen', 'LoadingScreen', 'PlayScreen', 'HighScoreScreen' - each of which has its own draw & update logic methods, sprites, other useful fields, etc. This works well for me as a game dev beginner, and it runs. However, on my 'PlayScreen' I want to run some animations before the player gets control - dropping in some artwork, playing some sound effects, generally prettifying things a little. However, I'm not sure what the best way to chain animations / sound effects / other timed general events is. I could make an intermediary screen, 'PrePlayScreen', which simply has all of this hardcoded, like so:

        Update(){
            Animation anim1 = new Animation(.....);
            Animation anim2 = new Animation(.....);
            anim1.Run();
            if(anim1.State == AnimationState.Complete)
                anim2.Run();
            if(anim2.State == AnimationState.Complete)
                // Load 'PlayScreen' screen
        }

    But this doesn't seem so great - surely there must be a better way? I then thought, 'Hey - an AnimationManager! That'd be awesome!'. But then that creeping OOP panic set in as I thought about it some more. If I create the Animation in my Screen, then add it to the AnimationManager (which may or may not be a GameComponent hooked up to Update/Draw), how can I get 'back' to it? To signal commands like start / end / repeat? I'd still need to keep a reference to the object in my Screen so that I could still communicate with it once it's buried in the bosom of a List in my AnimationManager. This seems bad.

    I've also tried using events - call 'Update' on all the animations in the PlayScreen update loop, but crucially all of the animations have a bool flag ('Active') which determines whether they should begin. The first animation has this set to 'true', all others 'false'. On completion the first animation raises an event, which sets animation 2's bool flag to true (and so it then runs). Once animation 2 is complete, another 'anim complete' event is raised, and the screen state changes.

    Considering the game I'm making is basically as simple as it gets, I know I'm overthinking this... it's just the paradigm shift from web to game development that is making me break out in a serious case of the stupids.
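
    One lightweight middle ground between the hardcoded if-chain and a full AnimationManager is a small sequence object the screen owns outright: enqueue the animations in order, update only the head, and pop it when it completes. A minimal sketch of the idea - the Animation/AnimationState members mirror the hypothetical types used above, and GameTime is assumed to be XNA's:

        // Minimal sequential runner the screen owns -- no global manager,
        // no cross-wired completion events, no Active flags.
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;   // for GameTime (XNA assumed)

        public class AnimationSequence
        {
            private readonly Queue<Animation> _pending = new Queue<Animation>();

            public bool IsComplete { get { return _pending.Count == 0; } }

            public void Enqueue(Animation animation)
            {
                _pending.Enqueue(animation);
            }

            public void Update(GameTime gameTime)
            {
                if (_pending.Count == 0)
                    return;

                Animation current = _pending.Peek();
                current.Update(gameTime);            // only the head animation advances

                if (current.State == AnimationState.Complete)
                    _pending.Dequeue();              // the next Update starts the next one
            }
        }

    The PlayScreen would enqueue its intro animations once, call Update(gameTime) on the sequence from its own update loop, and hand control to the player when IsComplete turns true - and since the screen created the animations, it still holds references for signalling start/end/repeat on any individual one.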

    Read the article

  • Automatically change resolution when not in dock

    - by jwir3
    I have Ubuntu 11.04 (yep, I know it's old news) on my Lenovo W520. At home, I have a dock with dual monitors. I have a pretty decent setup - things work almost perfectly (hence the reason I'm reluctant to upgrade... that and I'm not 100% sold on Unity).

    Anyway, the only annoyance I have is that when I'm on travel, I use the laptop screen. When I un-dock the laptop, I need to manually go into nvidia x-server settings and change the resolution from 'Auto' to 1920x1200, or it will think I have two screens, and my mouse pointer will be able to go way off the left side of the screen. This isn't a big deal, but I need to do it every time I restart the X server (so if I reboot, or have to kill it, etc...).

    What would be really nice is if there was a way for it to automatically detect whether or not there are external monitors (which it seems to do already), and switch into the mode I select, depending on which monitors are connected. Is there any way to accomplish this? I've posted my xorg.conf file for reference.

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011
        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 275.19 ([email protected]) Tue Jul 12 18:35:38 PDT 2011

        #Section "Monitor"
        #    Identifier "Monitor1"
        #    VendorName "Lenovo"
        #    ModelName "ThinkpadLCD"
        #    #HorizSync 28.0 - 33.0
        #    #VertRefresh 43.0 - 72.0
        #    #Option "DPMS"
        #EndSection

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "InputDevice"
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "DELL U2410"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 76.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "Quadro 1000M"
            Option "RegistryDwords" "EnableBrightnessControl=1"
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+120, DFP-6: nvidia-auto-select +1920+0"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+120, DFP-5: nvidia-auto-select +1920+0"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +1920+419, DFP-5: nvidia-auto-select +3840+0, DFP-6: nvidia-auto-select +0+0"
            # Removed Option "metamodes" "DFP-5: nvidia-auto-select +0+0, DFP-6: 1920x1200 +1920+0"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "NoLogo" "True"
            Option "TwinViewXineramaInfoOrder" "DFP-0"
            Option "TwinView" "1"
            Option "metamodes" "DFP-5: nvidia-auto-select +1920+0, DFP-6: 1920x1200 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection
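
    For the automatic switch itself, one approach is a small script run at login or bound to a hotkey. This is a sketch only: the proprietary NVIDIA driver of that era doesn't always expose individual outputs through xrandr, so it assumes yours does, and the output names below are placeholders - check `xrandr -q` for the real ones:

        #!/bin/sh
        # Sketch: pick a display layout based on whether the dock's monitor
        # reports as connected. LVDS-0 / DFP-5 are assumed names -- verify
        # with `xrandr -q` on the actual machine.

        if xrandr -q | grep -q '^DFP-5 connected'; then
            xrandr --auto                             # docked: let the dual-monitor layout apply
        else
            xrandr --output LVDS-0 --mode 1920x1200   # undocked: force the panel's native mode
        fi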

    Read the article

  • stuck in busybox after update

    - by Lyra500
    I should start out by saying that I am not very tech savvy, and have done my best to understand what is going on here from reading other people's posts; I have fumbled my way through several attempted fixes without getting anywhere...

    I run Ubuntu 12.04 on an admittedly cheap laptop from ebuyer, and in spite of being cheap and not a brand laptop, it has never had any major problems. Since updating to 12.04 when it was released, the boot-up process has always been slow. Logging in takes about 30 secs longer than it used to, and if I closed my laptop and opened it again, instead of being met with a login screen I'd be met with a black screen with a white mouse. As all of these problems had a behavior-changing fix (don't close the laptop, patiently wait for login, etc.) I didn't think of reporting any of the issues.

    Last night when the update manager prompted me that there were updates available, I duly installed them as I usually do. At the time a bug report came up; I cannot remember what it said, but I reported the bug as I was prompted to do. Then this morning when I started up my laptop, it went through the usual sequence - GRUB, purple screen with 'ubuntu', etc. - and then when it tried to boot, it went into BusyBox, and I have not been able to do anything to solve this. I have tried going into the GRUB menu and starting previous versions, but I come to the same point - BusyBox.

    I have borrowed another rather elderly laptop with Windows Vista to try to download Boot Repair and Ubuntu and burn them onto CD. I tried downloading Boot Repair five times, to laptop, CD and USB. Every time it would download and burn fine, but then the Windows laptop would tell me that 1% did not download properly, and my laptop would not run the USB or CD upon booting. I then tried to download Ubuntu and make a live CD, twice, but again was told that 1% did not download properly, and again my laptop will not run the CD upon booting, even when I went into the F2 menu and changed the boot priority (if that's what you call it) to make sure it was definitely trying the USB/CD first.

    This is the message I get from BusyBox:

        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands

    I have tried 'exit', but it mentions things about 'kernel panic' and 'tried to quit init!' and 'can't find /root/dev/console : no such file'. I know that the problem is probably caused by a kernel issue from the update last night, but I couldn't honestly tell you exactly what was updated, and without being able to boot from a live CD I'm not sure how to go about fixing this. As I said, I am not tech savvy and have done what I can to find a solution to the problem, but I'm starting to feel like I'm out of options. Does anyone have any ideas?

    Read the article

  • Need Help with partitions and such Dual Booting 13.10 w/ windows 7

    - by Aymax
    So I've spent the past couple of days freaking out about this and looking for answers, and I decided to resort to asking on forums. I probably should have before I wasted 2 entire days. I am trying to dual-install Ubuntu 13.10 with my x64 Windows 7 Home Premium computer. I have 6 GB of RAM, a 1 TB hard drive, and a 3.3 GHz dual core processor (just in case it matters). I've managed to figure some things out: I've burned the Ubuntu files onto a DVD, and I have been able to successfully run it off the disk. I also shrunk my Windows partition by 120 GB and partitioned that for Ubuntu (all using the Windows Disk Manager).

    Problems:

    1. When I turn my computer on with the DVD in the tray, the computer can't find Windows. It flashes a screen real quick that says something about not being able to find an operating system, and then goes to GRUB and asks what I want to do with Ubuntu. This scares me, because I don't know if that means I will not be able to boot Windows if I install Ubuntu.

    2. The Ubuntu 13.10 installer does not detect my Windows operating system. I only have the options to erase everything on my drive, or "something else." I choose that, which brings me to 3.

    3. I don't understand the partition table. I have no idea which drive I'm selecting to install stuff on, much less which one to select. I tried to tell by the amount of memory partitioned off, but none of the numbers seem to be accurate. Plus, all the names are dev/sda(#). I know Ubuntu knows the name of my partition, because the sidebar shows the names of the different drives, including the partition I made; so why don't they use the names? I have no idea what I'm going to be erasing. I've read that I should know which is which by the file system type, but they are all NTFS, including the one I made. My only other option was FAT; none for EXT2 or any of that, like people said to do.

    My main concerns are accidentally erasing Windows or not being able to access Windows. Any feasible solution is helpful, whether it helps me with the install or makes Ubuntu see Windows. I realize this question has been asked a lot, but I have found no feasible answers so far. I am relatively new to this and have never installed an operating system before, so I do not know most of the jargon. Please keep it relatively simple - I am not a programmer. Thanks.

    Read the article

  • xinetd 'connection reset by peer'

    - by ceejayoz
    I'm using percona-clustercheck (which comes with Percona's XtraDB Cluster packages) with xinetd, and I'm getting an error when trying to curl the clustercheck service.

    /usr/bin/clustercheck:

        #!/bin/bash
        #
        # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
        #
        # Author: Olaf van Zandwijk <[email protected]>
        # Documentation and download: https://github.com/olafz/percona-clustercheck
        #
        # Based on the original script from Unai Rodriguez
        #

        MYSQL_USERNAME="clustercheckuser"
        MYSQL_PASSWORD="clustercheckpassword!"
        ERR_FILE="/dev/null"
        AVAILABLE_WHEN_DONOR=0

        #
        # Perform the query to check the wsrep_local_state
        #
        WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

        if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
        then
            # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
            /bin/echo -en "HTTP/1.1 200 OK\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
            /bin/echo -en "\r\n"
            exit 0
        else
            # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
            /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
            /bin/echo -en "Content-Type: text/plain\r\n"
            /bin/echo -en "\r\n"
            /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
            /bin/echo -en "\r\n"
            exit 1
        fi

    /etc/xinetd.mysqlchk:

        # default: on
        # description: mysqlchk
        service mysqlchk
        {
            # this is a config for xinetd, place it in /etc/xinetd.d/
            disable         = no
            flags           = REUSE
            socket_type     = stream
            port            = 9200
            wait            = no
            user            = nobody
            server          = /usr/bin/clustercheck
            log_on_failure  += USERID
            only_from       = 10.0.0.0/8 127.0.0.1
            # recommended to put the IPs that need
            # to connect exclusively (security purposes)
            per_source      = UNLIMITED
        }

    When attempting to curl the service, I get a valid response (HTTP 200, text) but a 'connection reset by peer' notice at the end:

        HTTP/1.1 200 OK
        Content-Type: text/plain

        Percona XtraDB Cluster Node is synced.

        curl: (56) Recv failure: Connection reset by peer

    Unfortunately, Amazon ELB appears to see this as a failed check, not a successful one. How can I get clustercheck to exit gracefully in a manner that curl doesn't see a connection failure?
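
    One avenue worth trying - treat this as a sketch, though it mirrors the change later releases of percona-clustercheck made - is to emit a Content-Length header (and an explicit Connection: close) so that curl and ELB can tell the response is complete, rather than inferring the end of the body from xinetd tearing the socket down:

        # Sketch: build the body first, then send headers including its length.
        # $'...' quoting stores real CR/LF bytes, so ${#BODY} counts them too.
        BODY=$'Percona XtraDB Cluster Node is synced.\r\n'
        /bin/echo -en "HTTP/1.1 200 OK\r\n"
        /bin/echo -en "Content-Type: text/plain\r\n"
        /bin/echo -en "Connection: close\r\n"
        /bin/echo -en "Content-Length: ${#BODY}\r\n"
        /bin/echo -en "\r\n"
        /bin/echo -n "${BODY}"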

    Read the article

  • TortoiseSVN hangs in Windows Server 2012 Azure VM

    - by ZaijiaN
    Following @shanselman's article on remoting into an Azure VM for development, I spun up my own VS 2013 VM; that image runs on Windows Server 2012. Once I was able to remote in, I started installing all my dev tools, including TortoiseSVN 1.8.3 64-bit. Things went south once I started attempting to check out code from my personal SVN server. It would hang and freeze often, although sometimes it would work - I was able to partially check out projects, but I would get frequent connection timeout errors.

    My personal SVN server (VisualSVN 2.7.2) runs at home on a Windows 7 machine, and I have a DynDNS URL pointing to it. I have also configured my router to pass all 443 traffic through to the appropriate port on the server. I self-signed a cert and made sure it was imported into the VM cert store under trusted root authorities. I have no problems connecting to my SVN server from 4-5 other computers and locations. From the Azure VM, in both IE and Chrome, I can access the repository web browser with no issues. There are no outbound firewall restrictions.

    I have installed other SVN add-ons for Visual Studio (AnkhSVN, VisualSVN) and attempted to connect to my SVN server, with largely the same results - random and persistent connection issues (hangs/timeouts). I spun up a completely fresh Windows Server 2008 Azure VM, installed TortoiseSVN, and had the same results.

    So I'm at a loss as to what the problem is and how to fix it. Web searches on TortoiseSVN and Windows Server issues don't yield any current or relevant information. At this point, I'm guessing that some setting or configuration in the MS Azure VM images is the culprit - although I should probably attempt to spin up my own local Windows Server VM to rule out that it's a Windows Server issue. Any thoughts? I hope I'm just missing something really obvious!

    Read the article

  • curl http_code of 000

    - by Mikkel Paulson
    I have a shell script that I use to monitor loading times and response codes on my live server cluster. It runs a total of 250 iterations every 5 minutes, distributed across 10 servers and 6 sites. It uses curl with the -w flag to return pertinent information, which is then parsed by my shell script:

        curl -svw 'monitor_load_times %{time_total} %{http_code}' -b 'server=$server' -m 15 -o /dev/null $url 2>&1

    This information is then parsed by a graphing script that can display a number of different responses. However, curl will occasionally return a response code of "000". When this happens, it seems to happen multiple times at once, despite being distributed over many iterations.

    What I'm trying to work out is whether this is a client-side issue that's skewing my results, or whether it's actually indicative of a server-side problem affecting my entire cluster. Does 000 mean that the connection was dropped? Database entries corresponding to curl iterations with that response code return "0.000" for the time_total value. All of the search results I've found for curl returning a code of 000 are related to HTTPS being unsupported, but all of my test URLs are HTTP. (The spike in 500 errors is a completely unrelated issue that affected my servers last night.)
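
    For what it's worth, 000 is what curl prints for %{http_code} when no HTTP status line was ever received - a failed connect, a timeout, or a dropped connection - so curl's exit code, not the placeholder status, says what actually happened. A sketch of capturing both (the URL is a placeholder):

        #!/bin/sh
        # Sketch: log curl's exit code alongside the (possibly 000) status code.
        # Exit 7 = couldn't connect, 28 = the -m timeout fired, 52/56 = dropped.
        url="http://example.com/"   # placeholder for the monitored URL
        out=$(curl -s -o /dev/null -m 15 -w '%{http_code} %{time_total}' "$url")
        rc=$?
        echo "$(date '+%F %T') $url $out rc=$rc"

    If the 000s line up with exit codes 7 or 28 on several servers at once, that points at something between the monitor and the cluster (DNS, a load balancer) rather than at the sites themselves.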

    Read the article

  • Connecting to ItsHidden in Ubuntu 9.10 problems

    - by Ionel Bratianu
    I'm trying to set up a VPN connection to ItsHidden on Ubuntu 9.10. I double-checked my credentials in the VPN configuration, but I don't think that is the problem. In my syslog I get these messages:

        Jan 11 14:38:46 NetworkManager: Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
        Jan 11 14:38:46 NetworkManager: VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 4502
        Jan 11 14:38:46 NetworkManager: VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
        Jan 11 14:38:46 NetworkManager: VPN plugin state changed: 1
        Jan 11 14:38:46 NetworkManager: VPN plugin state changed: 3
        Jan 11 14:38:46 pppd[4506]: Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
        Jan 11 14:38:46 NetworkManager: VPN connection 'ItsHidden' (Connect) reply received.
        Jan 11 14:38:46 pppd[4506]: pppd 2.4.5 started by root, uid 0
        Jan 11 14:38:46 pppd[4506]: Using interface ppp0
        Jan 11 14:38:46 NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Jan 11 14:38:46 NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
        Jan 11 14:38:46 pppd[4506]: Connect: ppp0 /dev/pts/1
        Jan 11 14:39:06 pptp[4508]: nm-pptp-service-4502 fatal[get_ip_address:pptp.c:430]: gethostbyname 'vpn.itshidden.com': HOST NOT FOUND
        Jan 11 14:39:06 pppd[4506]: Modem hangup
        Jan 11 14:39:06 pppd[4506]: Connection terminated.
        Jan 11 14:39:06 NetworkManager: VPN plugin failed: 1
        Jan 11 14:39:06 NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
        Jan 11 14:39:06 pppd[4506]: Exit.
        Jan 11 14:39:06 NetworkManager: VPN plugin failed: 1
        Jan 11 14:39:06 NetworkManager: VPN plugin failed: 1
        Jan 11 14:39:06 NetworkManager: VPN plugin state changed: 6
        Jan 11 14:39:06 NetworkManager: VPN plugin state change reason: 0
        Jan 11 14:39:06 NetworkManager: connection_state_changed(): Could not process the request because no VPN connection was active.
        Jan 11 14:39:06 NetworkManager: Policy set 'Auto eth0' (eth0) as default for routing and DNS.
        Jan 11 14:39:19 NetworkManager: [1263213559.003098] ensure_killed(): waiting for vpn service pid 4502 to exit
        Jan 11 14:39:19 NetworkManager: [1263213559.003289] ensure_killed(): vpn service pid 4502 cleaned up

    Because the gethostbyname call is failing, I suppose that NetworkManager doesn't know that I use proxies for accessing the Internet. I'm not sure that this is the real problem. Could you tell me a solution to make gethostbyname stop failing?
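
    Since pptp resolves the server name itself (and a PPTP tunnel can't go through an HTTP proxy anyway), one workaround sketch is to pin the name locally so no DNS lookup is needed. The IP below is a placeholder - resolve vpn.itshidden.com from any machine with working DNS and substitute the real address:

        # Sketch: pin the VPN hostname so pptp's gethostbyname() succeeds locally.
        # 1.2.3.4 is a placeholder for the real address of vpn.itshidden.com.
        echo '1.2.3.4 vpn.itshidden.com' | sudo tee -a /etc/hosts
        getent hosts vpn.itshidden.com   # verify the lookup now resolves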

    Read the article

  • Can't Access TFS 2010 Beta 2 from Visual Studio 2010 Beta 2 when domain joined

    - by Brian Sullivan
    I'm experimenting with an installation of TFS 2010 Beta 2 on a virtual machine under VirtualBox running Windows Server 2008. When I've got the server in a workgroup, I can connect to it from Visual Studio just fine, as long as I provide credentials for a local user on the server machine when prompted by the "Connect to Team Foundation Server" dialog. The desktop I'm running Visual Studio on is joined to a domain. However, when I join the server to the domain, I can no longer connect to it from Visual Studio. I get a pretty generic error message: "TF31002 - Unable to connect to team foundation server". It gives me several different possible problems, including an incorrect address or an incorrect username and password.

    I've already added the domain Windows identity with which I'm logged on to the desktop to the TFS Admins group on the server, so I don't think it's a username/password problem. I've also tried putting the literal IP address of the server in the dialog address box instead of the machine name, but still no dice. I made sure that network discovery was enabled on the server, too, and can navigate to "\\webserver2008" in Windows Explorer without any problems. It shouldn't be a firewall problem, since the TFS install creates the appropriate exceptions in Windows Firewall. It's all a bit confusing, since it seems to work when the server is in a workgroup.

    Note: I'm a dev, not an admin, so there are many subtleties of server administration with which I'm not familiar. Please make no assumptions about what I may or may not have tried; what may be obvious to you may have never occurred to me. Thanks in advance!

    Read the article

  • javaws not found

    - by Hunt
    I have a server with CentOS installed on it. Recently I installed JDK 1.6 on it. When I run the java command from a shell, it works perfectly fine. Java is stored in /usr/java/jdk1.6.0_25, and the path is reported as /usr/bin/ when I type which java. When I tried running javaws (which ships with JDK 1.6), it showed me the following error:

        Java Web Start splash screen process exiting ...
        Bad installation: JAVAWS_HOME not set: No such file or directory

    Executing the env command prints the following details:

        HOSTNAME=XX-XXX-XXX-XX
        TERM=xterm
        SHELL=/bin/bash
        HISTSIZE=1000
        OLDPWD=/usr/java
        SSH_TTY=/dev/pts/1
        USER=root
        LS_COLORS=no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:
        JAVA_PATH=/usr/java/jre1.6.0_24/jre/bin
        MAIL=/var/spool/mail/root
        PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/java/jre1.6.0_24/jre/bin
        INPUTRC=/etc/inputrc
        PWD=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin
        JAVA_HOME=/usr/java/jre1.6.0_24/jre/bin
        LANG=en_US.UTF-8
        SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
        SHLVL=1
        HOME=/root
        LOGNAME=root
        JAVAWS_HOME=/usr/java/jre1.6.0_24/bin
        SSH_CONNECTION=175.100.170.26 3387 64.150.190.94 22
        LESSOPEN=|/usr/bin/lesspipe.sh %s
        G_BROKEN_FILENAMES=1
        _=/bin/env
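
    Worth noting from that env output: JAVA_HOME, JAVAWS_HOME and PATH all still point into /usr/java/jre1.6.0_24, not the newly installed jdk1.6.0_25, so the launcher may be looking for files in a JRE that no longer exists. A sketch of repointing them - the JAVAWS_HOME value mirrors the layout the old variable used, and what the launcher actually expects can vary by release, so check the javaws script itself if this doesn't take:

        # Sketch: repoint the Java variables at the JDK that is actually installed.
        export JAVA_HOME=/usr/java/jdk1.6.0_25
        export JAVAWS_HOME=$JAVA_HOME/jre/bin   # assumption: a bin dir, as the old
                                                # value (jre1.6.0_24/bin) was
        export PATH=$JAVA_HOME/bin:$PATH
        which javaws && javaws -viewer          # re-test; persist in ~/.bash_profile if it works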

    Read the article

  • AclPermissionsFacet fault install SQL-2008-R2

    - by photo_tom
    While attempting to do an installation repair of SQL Server 2008 R2, I'm failing the pre-check rules. The module that is failing is AclPermissionsFacet, with this message: "The SQL Server registry keys from a prior installation cannot be modified. To continue, see SQL Server Setup documentation about how to fix registry keys." In the log file "Detail_GlobalRules.txt", I've been able to find the following error messages:

        2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSearch.
        2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerSCP.
        2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQLServer.
        2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerAgent.

    When I look at these keys in the registry, all of their permissions are blank. My problem is that I cannot find any good information on how to reset these keys. This is on my new home dev box, and I think these settings got corrupted during the migration from my previous machine. In reviewing the web, there doesn't seem to be good information. What there is suggests using subinacl.exe, but after trying it and seeing it is an XP-based program, I'm at a loss on how to continue.

    Configuration: Windows 7 64-bit Home Edition, SQL Server 2008 R2, 6 GB RAM. Suggestions?

    Read the article

  • Several IPs for my VPS

    - by Serafim
    I bought a VPS on santrex.net but can't get any reply from support. My problem: I have 5 IPs, but only 1 of them responds to ping! I can't set up DNS because I need a minimum of 2 IPs. Could you help me activate my other IPs?

        root@spnova:~# ifconfig
        eth0      Link encap:Ethernet  HWaddr aa:00:b9:4f:19:01
                  inet addr:188.72.240.100  Bcast:188.72.240.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:163342 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:13585 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:32862185 (32.8 MB)  TX bytes:15189036 (15.1 MB)

        eth0:0    Link encap:Ethernet  HWaddr aa:00:b9:4f:19:01
                  inet addr:188.72.240.101  Bcast:188.72.240.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        eth0:1    Link encap:Ethernet  HWaddr aa:00:b9:4f:19:01
                  inet addr:188.72.240.102  Bcast:188.72.240.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        eth0:2    Link encap:Ethernet  HWaddr aa:00:b9:4f:19:01
                  inet addr:188.72.240.103  Bcast:188.72.240.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        eth0:3    Link encap:Ethernet  HWaddr aa:00:b9:4f:19:01
                  inet addr:188.72.240.104  Bcast:188.72.240.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:11885 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:11885 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:8124693 (8.1 MB)  TX bytes:8124693 (8.1 MB)

        root@spnova:~# nano /etc/network/interfaces
        # Auto generated eth0 interfaces
        auto eth0 lo
        iface eth0 inet static
            address 188.72.240.100
            netmask 255.255.255.0
            up route add -net 188.72.225.0 netmask 255.255.255.0 dev eth0
            up route add default gw 188.72.225.1

        iface lo inet loopback

        auto eth0:0
        iface eth0:0 inet static
            address 188.72.240.101
            netmask 255.255.255.0

        auto eth0:1
        iface eth0:1 inet static
            address 188.72.240.102
            netmask 255.255.255.0

        auto eth0:2
        iface eth0:2 inet static
            address 188.72.240.103
            netmask 255.255.255.0

        auto eth0:3
        iface eth0:3 inet static
            address 188.72.240.104
            netmask 255.255.255.0
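
    Since all five addresses are bound and the aliases show UP, a useful first check is whether replies actually leave the box from each source address. A sketch:

        #!/bin/sh
        # Sketch: test each bound address as the source of outbound pings.
        # If only .100 gets replies, the provider is likely not routing the
        # other IPs to this server yet -- a ticket-with-the-host problem,
        # not a local config one.
        for ip in 188.72.240.100 188.72.240.101 188.72.240.102 188.72.240.103 188.72.240.104
        do
            printf '%s: ' "$ip"
            ping -c 1 -W 2 -I "$ip" 8.8.8.8 >/dev/null 2>&1 && echo ok || echo 'no reply'
        done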

    Read the article

  • API numbers don't match on compiled PHP extension

    - by tixrus
    I'm trying to get GD into my PHP. I recently installed PHP 5.3.0 on my system running Mac Leopard using MacPorts; it did not come with the GD module. So I downloaded GD, compiled it as an extension module as per http://www.kenior.ch/macintosh/adding-gd-library-for-mac-os-x-leopard, made php.ini point to it, restarted Apache, etc. But no GD. In the Apache error log it says:

        PHP Warning: PHP Startup: gd: Unable to initialize module
        Module compiled with module API=20060613
        PHP compiled with module API=20090115
        These options need to match
        in Unknown on line 0

    A bit of googling says I should not use the phpize I have before configuring and making these, but rather a new one called phpize5. I surely don't have any such thing - unless it's packed up inside something else in my PHP 5.3 distro. Where do you get it? On Ubuntu I could apparently just run sudo apt-get install php-dev and it would appear by magic - at least that's what the web page said. Unfortunately, I am running Mac OS X Leopard. How can I build this GD module on Leopard so that it will match the API number in my PHP?
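
    For context: the two API numbers in the warning identify the PHP the extension was built against (20060613 is the PHP 5.2-era module API) versus the PHP loading it (20090115 is 5.3's). There is no separate phpize5 download for a MacPorts PHP 5.3; the fix is to rebuild the extension with the phpize that ships alongside the PHP actually in use. A sketch, assuming MacPorts' default /opt/local prefix and a hypothetical source directory name:

        # Sketch: confirm which phpize the build will use, then rebuild GD.
        which php phpize            # both should resolve under /opt/local/bin
        php -i | grep 'PHP API'     # should print 20090115
        phpize --version            # its "PHP Api Version" must match that number

        cd gd-extension-source      # hypothetical: your unpacked GD build directory
        phpize && ./configure && make && sudo make install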

    Read the article

  • Firebird 2.1: gfix -online returns "database shutdown"

    - by darvids0n
    Hey all. Googling this one hasn't made a bit of difference, unfortunately, as most results specify the syntax for onlining a database after using gfix -shut -force 30 (or any other number of seconds) as gfix -online dbname, and I have run gfix -online dbname both with and without login credentials for the DB in question. The message that I get is:

        database dbname shutdown

    Which is fine, except that I want to bring it online now. It's out of the question to close fbserver.exe (running on a Windows box; as far as I know it's Classic Server 2.1.1, but it may be Super), since we have other databases running off of it which need almost 24/7 uptime. The message from doing another gfix -shut -force or -attach or -tran is:

        invalid shutdown mode for dbname

    which appears to match the documentation of what happens if the database is already fully shut down. Ideas and input greatly appreciated, especially since at the moment time is a factor for me. Thanks!

    EDIT: The whole reason I shut down the DB was to clear out "active" transactions which were linked to a specific IP address, and that computer is my dev terminal (actually a virtual machine where I develop frontends for the database software), but I had no processes connecting to the database at the time. They looked like orphaned transactions to me, and they weren't in limbo as far as I know. Running a manual sweep didn't clear them out, and deleting the rows from MON$STATEMENTS didn't work, even though Firebird 2.1 supposedly supports cancelling queries that way. My last resort was to "restart" the database, hence the above issue.
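
    One thing worth trying - hedged, since this is from memory of the Firebird 2.x gfix documentation and should be verified against your version: Firebird 2 made shutdown modes explicit, and a database left in full shutdown sometimes needs the target mode named when bringing it back online, with SYSDBA credentials and the full database path:

        rem Sketch: name the online mode explicitly; path and password are placeholders.
        gfix -user SYSDBA -password masterkey -online normal "C:\data\dbname.fdb"

        rem If it still reports shutdown, ask gstat what state the header records:
        gstat -user SYSDBA -password masterkey -h "C:\data\dbname.fdb"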

    Read the article

  • Apachebench on node.js server returning "apr_poll: The timeout specified has expired (70007)" after ~30 requests

    - by Scott
    I just started working with node.js, and doing some experimental load testing with ab is returning an error at around 30 requests or so. I've found other pages showing much better concurrency numbers than I'm getting, such as http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php. Are there some critical server configuration settings that need to be made to achieve those numbers? I've watched memory in top and I still see a decent amount free while running ab, and I've watched mongostat as well and am not seeing anything that looks suspicious. The command I'm running, and the error, are:

        ab -k -n 100 -c 10 postrockandbeyond.com/
        This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
        Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/

        Benchmarking postrockandbeyond.com (be patient)...apr_poll: The timeout specified has expired (70007)
        Total of 32 requests completed

    Does anyone have any suggestions on things I should look into that may be causing this? I'm running it on OS X Lion, but have also run the same command on the server with the same results.

    EDIT: I eventually solved this issue. I was using TTAPI, which connects to turntable.fm through websockets. On the homepage, I was connecting on every request. So what was happening was that after a certain number of connections, everything would fall apart. If you're running into the same issue, check whether you are hitting external services on each request.
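
    The pattern behind that fix - opening a fresh upstream websocket connection inside every request handler - exhausts either the remote service's connection limit or local descriptors as soon as concurrency rises. A sketch of the corrected shape in Node (the ttapi usage here is illustrative; check the library's docs for its real constructor and event names):

        // Sketch: one shared upstream connection, created at startup, reused per request.
        var http = require('http');
        var TTAPI = require('ttapi'); // assumed require name

        var bot = new TTAPI('AUTH', 'USERID', 'ROOMID'); // connect once, at boot
        var latestRoomInfo = null;
        bot.on('roomChanged', function (data) { latestRoomInfo = data; });

        http.createServer(function (req, res) {
          // no per-request connect: just read the state the shared client maintains
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify(latestRoomInfo || {}));
        }).listen(8080);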

    Read the article

  • Difficulty restoring a differential backup in SQL Server, 2 media families are expected or no files

    - by digiguru
    I have SQL backups copied from server A to server B on a nightly basis. We want to move the SQL Server instance from server A to server B without much downtime, but the files are very large. I assumed that performing a differential backup and restore would solve the problem. My process:

    1. Copy the full backup from server A to server B (10+ GB)
    2. Open SQL Server Management Studio on server B
    3. Right-click on Databases
    4. Restore Database
    5. Type in the new DB name
    6. Choose "From Device" and browse to the backup file
    7. Click OK. This restores the original "full" backup
    8. Test the new DB with the dev application - everything works :)
    9. On the original database, right-click on the DB > Tasks > Backup...
    10. Backup type = Differential, back up to disk, add a new file and remove the old one (it needs to be a small file to transfer for the smallest amount of outage)
    11. Copy the diff backup onto the new DB's server
    12. Right-click on the DB > Tasks > Restore Database

    This is where I get stuck. If I add both the new differential file and the original backup to the restore process, I get an error:

        The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification. RESTORE HEADERONLY is terminating abnormally.

    But if I try to restore using just the differential file, I get:

        System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo)

    Any idea how to do it? Is there a better way of restoring backups with limited downtime?
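
    For reference, a differential can only apply if the full backup was restored WITH NORECOVERY, leaving the database in the "Restoring..." state; restoring it to a usable database first, as in step 7, closes the recovery path, and adding both files as devices in one dialog makes SSMS read them as one striped media set, hence the media-family error. A T-SQL sketch of the two-step restore (database name, logical file names and paths are placeholders):

        -- Sketch: restore the full backup without recovering, then apply the diff.
        RESTORE DATABASE [NewDB]
            FROM DISK = N'D:\backups\full.bak'
            WITH NORECOVERY, REPLACE,
                 MOVE N'DataLogicalName' TO N'D:\data\NewDB.mdf',
                 MOVE N'LogLogicalName'  TO N'D:\data\NewDB_log.ldf';

        RESTORE DATABASE [NewDB]
            FROM DISK = N'D:\backups\diff.bak'
            WITH RECOVERY;   -- brings the database online after the differential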

    Read the article

  • The frame buffer layout of the current display cannot be made to match...

    - by adambox
    I get this error when I have a VM running in VMware Workstation on my work PC and try Remote Desktop-ing in from my iMac at home. I just upgraded VMware Workstation from 6.0 to 7.0, and now I'm also getting it when I try to resume my VM at work. It then asks me the scary question of whether I want to preserve or discard the suspended state. I don't want to lose stuff! Ack!

    Update: I backed up my VM and tried hitting that "discard" button, and the result was a reboot of the VM. I then tried restoring to a snapshot, and none of my snapshots work! Is there any way to fix this so I can run 7 but still have my old snapshots?

        The frame buffer layout of the current display cannot be made to match the frame buffer layout stored in the snapshot.
        The dimensions of the frame buffer in the snapshot are: Max width 3200, Max height 1770, Max size 22659072.
        The dimensions of the frame buffer on the current display are: Max width 3200, Max height 1600, Max size 20512768.
        Error encountered while trying to restore the virtual machine state from file "C:\Documents and Settings\adam\My Documents\My Virtual Machines\dev\Windows XP Professional.vmss".

    What do I do so I don't break things horribly?

    Read the article

  • How to import certificate for Apache + LDAPS?

    - by user101956
    I am trying to get LDAPS to work through Apache 2.2.17 (Windows Server 2008). If I use plain-text LDAP, my configuration works great:

        LDAPTrustedGlobalCert CA_DER C:/wamp/certs/Trusted_Root_Certificate.cer
        LDAPVerifyServerCert Off
        <Location />
            AuthLDAPBindDN "CN=corpsvcatlas,OU=Service Accounts,OU=u00958,OU=00958,DC=hca,DC=corpad,DC=net"
            AuthLDAPBindPassword ..removed..
            AuthLDAPURL "ldaps://gc-hca.corpad.net:3269/dc=hca,dc=corpad,dc=net?sAMAccountName?sub"
            AuthType Basic
            AuthName "USE YOUR WINDOWS ACCOUNT"
            AuthBasicProvider ldap
            AuthUserFile /dev/null
            require valid-user
        </Location>

    I also tried the other encryption choices besides CA_DER, just to be safe, with no luck. Finally, I also needed this with Apache Tomcat. For Tomcat I used the Tomcat JRE and ran a line like this:

        keytool -import -trustcacerts -keystore cacerts -storepass changeit -noprompt -alias mycert -file Trusted_Root_Certificate.cer

    After doing the above, LDAPS worked great via Tomcat. This lets me know that my certificate is OK.

    Update: Both LDAP modules are turned on, since using ldap instead of ldaps works fine. When I run a git clone, this is the error returned:

        C:\Temp> git clone http://eqb9718@localhost/git/Liferay.git
        Cloning into Liferay...
        Password:
        error: The requested URL returned error: 500 while accessing http://eqb9718@localhost/git/Liferay.git/info/refs
        fatal: HTTP request failed

    access.log has this:

        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:12 -0600] "GET /git/Liferay.git/info/refs service=git-upload-pack HTTP/1.1" 500 535
        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:33 -0600] "GET /git/Liferay.git/info/refs HTTP/1.1" 500 535

    apache_error.log has nothing. Is there any more verbose logging I can turn on, or better tests to do?
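
    A useful next test - a diagnostic sketch, assuming an openssl binary is available on the Apache host (e.g. from a standalone Windows build): drive the TLS handshake by hand and check that the chain the domain controller presents verifies against the imported root. Since the .cer is DER-encoded (hence CA_DER), convert it to PEM first for -CAfile:

        rem Sketch: verify the LDAPS handshake and trust path outside Apache.
        openssl x509 -inform der -in Trusted_Root_Certificate.cer -out ca.pem
        openssl s_client -connect gc-hca.corpad.net:3269 -CAfile ca.pem -showcerts

        rem A healthy handshake ends with "Verify return code: 0 (ok)".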

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). is fs dead?

    - by ip
    Hi,

    My company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually:

        [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [329862.817846] EXT3-fs: group descriptors corrupted!

    I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success:

    - e2retrieve
    - testdisk
    - photorec
    - dd_rescue/dd_rhelp
    - ddrescue
    - fsck.ext2
    - e2salvage

        dumpe2fs 1.41.3 (12-Oct-2008)
        Filesystem volume name:   /dev/sda3
        Last mounted on:          <not available>
        Filesystem UUID:          dd51610b-6de0-4392-a6f3-67160dbc0343
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal filetype sparse_super
        Default mount options:    (none)
        Filesystem state:         not clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              9502720
        Block count:              18987570
        Reserved block count:     949378
        Free blocks:              11555345
        Free inodes:              11858398
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         16384
        Inode blocks per group:   512
        Last mount time:          Wed Mar 24 09:31:03 2010
        Last write time:          Mon Apr 12 11:46:32 2010
        Mount count:              10
        Maximum mount count:      30
        Last checked:             Thu Jan 1 01:00:00 1970
        Check interval:           0 (<none>)
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               128
        Journal inode:            8
        Journal backup:           inode blocks
        dumpe2fs: A block group is missing an inode table while reading journal inode

    Are there any other tools I should test before considering this disk definitely unrecoverable? Many thanks, ip
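
    Since the complaint is specifically about the group descriptors, one avenue the list above doesn't cover is pointing e2fsck at a backup superblock, which carries its own copy of the descriptor table. A sketch - run it against a dd/ddrescue image of the partition rather than the only copy of the data:

        # Sketch: locate the backup superblocks, then fsck against one.
        # mke2fs -n is a dry run that only PRINTS where superblocks would go --
        # double-check the -n flag is present before running anything.
        mke2fs -n -b 4096 /dev/sda3           # block size 4096 per the dumpe2fs output
        e2fsck -b 32768 -B 4096 /dev/sda3     # 32768 = usual first backup for 4k-block ext3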

    Read the article
