Search Results

Search found 15906 results on 637 pages for 'scott and the dev team'.


  • Scrum: What if the Product Owner has tasks?

    - by Lauren J
    I have just started working with a team that has picked up some aspects of Scrum (two-week timeboxing) but not others (the team does not currently agree to all estimates or to the number of points in a sprint, but I'll change this soon). The product owner is also a technical resource (a scientist) with some development background. Is it appropriate to have the product owner's tasks (which mostly involve research) mixed in with the team's tasks (some of which are research and some development)?

    Read the article

  • Corrupted File System on Dual HD/Dual Boot System

    - by Troy
    I have the following system set up: 2 drives, 1 TB each, one with Windows 7 and the other with what used to be Ubuntu 11.x. After an update my system became corrupted, and now the file system is apparently corrupt. The Ubuntu drive is /dev/sda2; the Windows 7 drive is /dev/sda1. I've tried fsck /dev/sda2 -t ext3 and that does nothing. I'm not sure what to do at this point. I don't even mind wiping the /dev/sda2 drive clean, so it will at least accept a completely new installation of Ubuntu. I just don't know how to do that. Please help. Thank you
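    A minimal sketch of the two options described above (re-checking the filesystem, or wiping /dev/sda2 for a fresh install). The ext4 type is an assumption -- Ubuntu 11.x installs default to ext4, not ext3 -- and the device names are taken from the question, so verify them first:

      sudo fdisk -l                  # confirm which partition really holds Ubuntu
      sudo fsck.ext4 -f /dev/sda2    # try fsck against ext4 rather than ext3

      # If the goal is just a clean slate for a fresh Ubuntu install,
      # reformatting the partition works (this ERASES everything on it):
      sudo mkfs.ext4 /dev/sda2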

    Read the article

  • TFS Hosting: discountasp.net TFS

    - by Enrique Lima
    In the last month or so I have been able to test and experience first-hand the offering from discountasp.net for hosted TFS 2010. This first part is a description of the setup process for the account itself, and some additional information on what you will find through the portal on their site. Not long ago, I posted a little tidbit on hosting TFS. Through it I also did a shameless plug for my employer, our services and the type of hosting we recommend. So, wouldn't my running on discountasp.net be an issue? Actually? NO. Ok, enough rambling. Let's get some details here. It is a Software as a Service model. Through it we get Source Control, Version Control, Work Item Tracking and such. What about Build? If your need includes Build Management and such, you may need to look at some other options. But this is still a great offering for those that are moving from SourceSafe, or for organizations who have 3 to 5 developers on staff and do not foresee getting larger anytime soon. Can it support more than 5 developers? Yes, but then we need to get into how you are using TFS. Do you need more than just Basic? For example, SharePoint and Reporting Services integration. The signup process was seamless! Very easy to follow and complete, and then transition to Visual Studio to start working. An email followed the signup process; it contained details on how to get to the Team Foundation Server Control Panel login. Once there, the portal walked me through the initial setup process of naming my Team Project Collection. Moving on, once I clicked the area to get my server info, I got the connection details for my hosted server. Then it was a matter of getting the first user in there, and on to connecting Visual Studio to my hosted TFS. With the server information and the user account created, I configured those options in Visual Studio. Using Team Explorer, I added a new server configuration. Once this is provided, click OK; you will be challenged for a username and password, and after providing them you will land on the connection screen. Then click Close. You will now be connected to your server and Team Project Collection. Since this will likely be the first time connecting, you will have no projects (I already have 2 going). Click Connect, and you will be back in Team Explorer. My next post on the topic will cover creating your first Team Project and uploading a Project Template to the server.

    Read the article

  • Core debugger enhancements in VS2010

    Since my team offers "parallel debugging", we refer to the team delivering all the other debugging features as the "core debugger" team. They have published a video of new VS2010 debugger features that I encourage you to watch to find out about enhancements to DataTips, breakpoints, dump debugging (incl. the IL interpreter) and the Threads window. The raw list of features with short descriptions is also here. Comments about this post are welcome at the original blog.

    Read the article

  • 3D BSP rendering for maps made in 2d platform style

    - by Dev Joy
    I wish to render a 3D map which is always seen from the top; the camera is in the sky, always looking down at the earth. A sample floor layout is shown in the original question. I don't think I need complex structures like BSP trees to render it. I mean, I can divide the map into grids and render them as is done in 2D platform games. I just want to know if this is a good idea and what may go wrong if I don't choose BSP tree rendering here. Please also mention if any better-known rendering techniques are available for such situations.

    Read the article

  • Mass bulk add domains to web hosting service (possible?)

    - by Scott
    I was wondering if anyone does bulk adding of domains to their web hosting provider (Amazon, Linode, Rackspace, etc.). I am thinking of creating a product that allows users to host their sites on top of my web hosting, and I want something that allows me to bulk add domains (and point their DNS to my web host's DNS) with as little manual work as possible. I am thinking of getting a VPS to do this. Is this even possible? Thanks, Scott

    Read the article

  • Why is my partition claiming to be out of space?

    - by Dr C
    My file system claims to only have 4.5 GB left, while my OS (a folder within File System) still has 75.2 GB left. I put something near 130 GB on my Ubuntu partition, so it should have enough space. I confirmed that I can put things in OS that exceed the space available in File System, but that makes no sense: OS is listed as a folder inside of File System, so why would it have more space than its parent folder? What is going on? Here is the output of df:
      Filesystem     1K-blocks      Used  Available Use% Mounted on
      /dev/sda5      113773200 103741440    4252408  97% /
      udev             2004600         4    2004596   1% /dev
      tmpfs             804756       848     803908   1% /run
      none                5120         0       5120   0% /run/lock
      none             2011884       436    2011448   1% /run/shm
      /dev/sda2      127526908  54045584   73481324  43% /media/OS
      /dev/sda3       39144708     89016   39055692   1% /media/DATA
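    A quick way to confirm what the df output above is saying -- "/" and "OS" are separate partitions, so free space in one says nothing about the other -- and to find what is actually filling the root partition (paths are the ones from the question):

      df -h / /media/OS                        # two different partitions: /dev/sda5 vs /dev/sda2
      sudo du -xh --max-depth=1 / | sort -h    # largest directories on the root partition only
                                               # (-x keeps du from crossing into /media/OS or /media/DATA)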

    Read the article

  • Unable to access “105 GB Volume”

    - by user170924
    Error mounting /dev/sda2 at /media/fehr/8CBE6431BE6415CC: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sda2" "/media/fehr/8CBE6431BE6415CC"' exited with non-zero exit status 14: Windows is hibernated, refused to mount. Failed to mount '/dev/sda2': Operation not permitted The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option.
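    The error text itself points at the two ways out: boot Windows and shut it down fully (no hibernation or fast startup), or mount the volume read-only for now. A minimal read-only mount sketch, using the device name from the error and an example mount point:

      sudo mkdir -p /mnt/windows
      sudo mount -t ntfs-3g -o ro /dev/sda2 /mnt/windows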

    Read the article

  • Visual Studio Extension: Web Essentials

    - by BizTalk Visionary
    To quote Scott Hanselman… Visual Studio 2010 is really extensible, and that's allowed many folks on the team to try out new features for web development without having to rebuild Visual Studio itself. One of those "playground" extensions is "Web Essentials" by Mads Kristensen. Mads handles HTML5 and CSS3 tools for our team. You might remember Mads from when we released the Web Standards Update a few months back. Get it here: Scott Hanselman's blog...

    Read the article

  • Update: TFS Power Tools March 2011

    - by Enrique Lima
    There is an update available for the TFS Power Tools and the TFS Build Power Tools. Among the updates to the Tools: Changes to the Team Foundation Server Backups Add-In for TFS Admin Console. Added functionality to the Windows Shell Extension. Changes to the tfpt command line tool that allows you to script build management commands. For a full detail of the changes, read Brian Harry’s post  http://blogs.msdn.com/b/bharry/archive/2011/03/03/mar-11-team-foundation-server-power-tools-are-available.aspx To download the Power Tools: Team Foundation Server Power Tools Team Foundation Server Build Extensions Power Tool

    Read the article

  • Unable to mount external hard drive - Damaged file system and MFT

    - by Khalifa Abbas Lame
    I get the following error when i try to mount my external hard drive. UNABLE TO MOUNT Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"' exited with non-zero exit status 13: ntfs_attr_pread_i: ntfs_pread failed: Input/output error Failed to read of MFT, mft=6 count=1 br=-1: Input/output error Failed to open inode FILE_Bitmap: Input/output error Failed to mount '/dev/sdc1': Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details. It doesn't mount on windows either: "I/O Device error" it's an ntfs hard drive with a single partition Of course, i tried chkdsk /f. it reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). also tried with the /b flag. ntfsfix reported the volume as corrupt. TestDisk was able to fix a small error with the partition table by adding the "80" flag for the active (only) partition. TestDisk also confirmed that the boot sector was fine and it matched the backup. However, when attempting to repair the MFT, it couldn't read the MFT. It also couldn't list the files on the hard drive. It says file system may be damaged. Active@ also shows that MFT is missing or corrupt. So how do i fix the file system? or the MFT?
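    One precaution the post does not mention, so purely a suggestion: with a disk that is already returning I/O errors, it is usually safer to image it first and run the repair tools against the image rather than the hardware. A sketch, with made-up file names and the device name taken from the error:

      sudo apt-get install gddrescue                                 # provides GNU ddrescue
      sudo ddrescue -d -r3 /dev/sdc1 khalibloo2.img khalibloo2.map   # copy what is readable, retrying bad areas
      # Point TestDisk/ntfsfix at khalibloo2.img instead of the failing disk; if the
      # repair succeeds, the image can be mounted read-only to copy the data off:
      sudo mkdir -p /mnt/recovery
      sudo mount -o loop,ro -t ntfs-3g khalibloo2.img /mnt/recovery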

    Read the article

  • Do we need to adopt a black-box asset our project is inheriting from its predecessor?

    - by Tom Anderson
    Our client has an eCommerce site which was developed by an in-house team, and is now showing its age. I work for a firm brought in as external contractors to build a replacement. Part of the current site is a Flash viewer applet which displays media about the product - zoom-able images, 360-degree views, movies, and so on. We need to show the same media the current site does, so we are simply reusing the viewer. The viewer is embedded on a page in the usual way, and told what media to show by means of an XML file it loads from our server, which is pretty simple for us to generate. We've got this working; it was pretty straightforward. But what else do we need to do? The thing is, as far as we're concerned, the viewer is a binary blob which is served from the client's content-distribution network. We embed it, feed it some XML, and it does its job, but we have no power over its internals. It's completely opaque to us - a black box. We can use it to do what it does, but we can't change it, so if we ever need to do something different, we're stuffed. We're building this site for the client, and when we're done, we'll hand it over for them to maintain. We won't be doing the maintenance ourselves. There's a small team within the client who are working as part of our team, and who will be the ones doing the maintenance. That team only includes one person from the team that built the old site, and it's not someone who knows the image viewer. The people who do know the image viewer are not slated to join our team when our system replaces theirs - they'll be moved to other projects. The documentation on the viewer is extremely thin, and as far as i know doesn't cover the internals at all. My worry is that if someone doesn't take some positive action, all knowledge of the internal workings of the viewer - even down to where the source code for it is - will be lost. It's possible it already has been. Is this something to worry about? If so, whose job is it to worry about it? What should they do about it once they've got worried?

    Read the article

  • Managing Kindle Fire on 12.04 via Micro-USB

    - by pirtle
    To begin, I have read both Is there a way to get a Kindle Fire to work with 12.04? and How can I transfer files to a Kindle Fire with a Micro-USB cable? My problem is that I am unable to mount my Kindle Fire in order to add books to it. I have installed calibre, but it cannot manage any devices until the computer itself has recognized them. The latter post had an excellent answer (provided by @jeremiah) that was making some progress. Unfortunately, I don't think I know enough about the -t flag used with mount. This is what I've done... Ran dmesg to locate the device: [ 3.920886] sd 6:0:0:0: [sdb] Attached SCSI removable disk Confirmed its location: $ sudo ls -l /dev/disk/by-id lrwxrwxrwx 1 root root 9 Aug 18 15:52 usb-Amazon_Kindle_3C6C002600000001-0:0 -> ../../sdb So we know that my Kindle is recognized as /dev/sdb. I then used the mount command suggested by @jeremiah: $ sudo mount -t ext3 /dev/sdb/ /mnt/kindle/ mount: no medium found on /dev/sdb The same error occurs for sudo mount /dev/sdb /mnt/kindle. Note: I have created the 'kindle' directory in '/mnt'. Any suggestions?
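    A hedged guess at the usual pitfalls rather than a confirmed fix: the Fire's storage is FAT-formatted rather than ext3, it only exposes its medium once the device is unlocked and connected on its own screen, and the device path should not carry a trailing slash. The filesystem type and partition name below are assumptions:

      dmesg | tail -20                           # check whether a partition such as /dev/sdb1 has appeared
      sudo mount -t vfat /dev/sdb /mnt/kindle    # whole device, no trailing slash
      sudo mount -t vfat /dev/sdb1 /mnt/kindle   # or the partition, if one shows up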

    Read the article

  • Announcing the ASP.NET MVC 3 Release Candidate

    In this article, Scott provides a detailed overview of the features included with the ASP.NET MVC 3 Release Candidate. He examines some of the key features such as Razor IntelliSense within Visual Studio, the NuGet Package Manager, Partial Page Output Caching, Unobtrusive JavaScript and Validation, the Remote Validator and Granular Request Validation. He also provides links to the PDC talk given by Scott Hanselman on ASP.NET MVC 3, including the new improvements shipped with the ASP.NET MVC 3 Release Candidate.

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date, so I had the time, and having recently been binge-watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid, and that was that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables, containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems and reduced the logical reads down to fewer than 300. The analysis also revealed that on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one.
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
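    The mismatch described above is easy to sketch. A hypothetical repro (the table and column names are invented; only the pattern of the join-predicate type mismatch matches the article), run here through sqlcmd:

      # Hypothetical example only -- the schema below is made up to illustrate the
      # implicit-conversion pattern the article describes.
      sqlcmd -d Scratch -Q "
        -- Slow: CustomerId is INT in FactOrders but VARCHAR(50) in DimCustomer, so the
        -- varchar side is implicitly converted on every row and the join cannot seek.
        SELECT COUNT(*)
        FROM dbo.FactOrders f
        JOIN dbo.DimCustomer c ON f.CustomerId = c.CustomerId;

        -- Fix: align the types, as the BI developers did, and the reads collapse.
        ALTER TABLE dbo.DimCustomer ALTER COLUMN CustomerId INT NOT NULL;
      "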

    Read the article

  • Group Matchmaking

    - by Simon Kérouack
    Consider different groups (1 or more players) queuing together. We want to make 2 opposing teams, each containing the same number of players, while keeping the groups together. At the same time we want to make both teams' average ranking as close as possible. Now also consider that our working set is the subset of groups currently queuing within a given ranking range. As an example, let's say we have the following groups, ordered by queuing time:
      Id, playerCount, totalRank, avgRank
      0, 3, 126, 42
      1, 2, 60, 30
      2, 1, 25, 25
      3, 2, 80, 40
      4, 1, 40, 40
      5, 1, 20, 20
      6, 3, 150, 50
    For this specific subset, the expected output should ideally be: team1: 0, 1 (total: 186); team2: 2, 5, 6 (total: 195). Up to now the solution I have been using is to balance out the teams by having each team pick the group with the highest ranking within the subset, turn by turn. The team that picks is the one with the currently lower average rank, unless one team is already full. If one team is already full, the other team tries to complete itself with groups that make the rank gap as small as possible. This solution turns out to have issues with frequent edge cases, and I'm looking for a better solution, or some fine-tuning that could be made. In most cases players seem to want teams of 5 and queue in groups of 2; our average subset when 2 teams of 5 are chosen is made up of about 14 players, if that is of any help.

    Read the article

  • How to represent an agile project to people focused on waterfall [closed]

    - by ahsteele
    Our team has been asked to represent our development efforts in a project plan. No one is unhappy with our work or questioning our ability to deliver; we are just participating in an IT cattle call for project plans. Trouble is, we are an agile team and haven't thought about our work in terms of a formal project plan. While we have a general idea of what we are working on next, we aren't 100% sure until we plan an iteration. Until now our team has largely operated in a vacuum and has not been required to present our methodology or metrics to outside parties. We follow most of the practices espoused in Extreme Programming. We hold quarterly planning meetings to get a general idea of the stories we are going to work on for a quarter. That said, our stories are documented on 3x5 cards and are only estimated at the beginning of the iteration in which they are going to be worked. After estimation we document the story in Team Foundation Server. During an iteration, we attach code to stories and mark stories as completed once finished. From this data we are able to generate burn-down and velocity charts. Most importantly, we know our average velocity for an iteration, keeping us from biting off more than we can chew. I am not looking to modify the way we do development, but I want to present our development activities in a report that someone only familiar with waterfall will understand. In What Does an Agile Project Plan Look Like, Kent McDonald does a good job laying out the differences between agile and waterfall project plans. He specifies the differences in consumable bullets: an agile project plan is feature-based; an agile project plan is organized into iterations; an agile project plan has different levels of detail depending on the time frame; and an agile project plan is owned by the team. Being able to explain the differences is great, but how best to present the data?

    Read the article

  • Ubuntu 12.04 + Wifi not working

    - by user171154
    i'm having problems connecting over wireless. At the moment, I'm using wicd. It seems to get stuck on "Verifying AP association...". Without wicd I can get the connection up and ping the Net - but if I take eth0 down (ifconfig eth0 down), my wireless goes away too (same result if I unplug the wire instead). wicd is the only way I can bring eth0 back (which is the main reason I'm using it) - ifconfig eth0 and/or ifup eth0 do not re-enable the connection (I just discovered it leaves out the gateway. Adding the gateway back in re-enables the connection including wifi; I didn't want to delete the info about wicd above in case it gives someone an idea.) Doing it manually, despite the errors (which it would be nice to also resolve) - allows me to ping the outside world: ifup wlan0 ioctl[SIOCSIWENCODEEXT]: Invalid argument ioctl[SIOCSIWENCODEEXT]: Invalid argument ssh stop/waiting ssh start/running, process 17336 ping -I wlan0 -c 4 8.8.8.8 PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_req=1 ttl=43 time=48.8 ms 64 bytes from 8.8.8.8: icmp_req=2 ttl=43 time=47.9 ms 64 bytes from 8.8.8.8: icmp_req=3 ttl=43 time=48.7 ms 64 bytes from 8.8.8.8: icmp_req=4 ttl=43 time=53.2 ms --- 8.8.8.8 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 47.975/49.711/53.235/2.063 ms # iwconfig lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"TPLINK" Mode:Managed Frequency:2.427 GHz Access Point: 64:66:xx:xx:xx:22 Bit Rate=108 Mb/s Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=70/70 Signal level=-39 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:3 Missed beacon:0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 01 serial: f0:7d:68:c1:b4:13 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.2.0-67-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:17 memory:dfbf0000-dfbfffff ip route default via 192.168.0.1 dev eth0 default via 192.168.0.1 dev wlan0 metric 100 169.254.0.0/16 dev wlan0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.102 192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.12 (For the record, I have no idea what the 169.254.0.0 address is doing there.) 
uname -a 3.2.0-67-generic-pae #101-Ubuntu SMP Tue Jul 15 18:04:54 UTC 2014 i686 i686 i386 GNU/Linux lshw -C network *-network description: Ethernet interface product: NetXtreme BCM5751 Gigabit Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 01 serial: 00:11:11:59:fc:09 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5751-v3.23a ip=192.168.0.102 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:16 memory:dfcf0000-dfcfffff *-network description: Wireless interface product: AR5418 Wireless Network Adapter [AR5008E 802.11(a)bgn] (PCI-Express) vendor: Qualcomm Atheros physical id: 0 /etc/network/interfaces # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback source /etc/network/interfaces.eth0 source /etc/network/interfaces.wlan0 /etc/network/interfaces.eth0 #Main Interface auto eth0 iface eth0 inet static address 192.168.0.102 netmask 255.255.255.0 gateway 192.168.0.1 /etc/network/interfaces.wlan0 auto wlan0 iface wlan0 inet static address 192.168.0.12 gateway 192.168.0.1 dns-nameservers 192.168.0.1 8.8.8.8 netmask 255.255.255.0 wpa-driver wext wpa-ssid TPLINK wpa-ap-scan 1 wpa-proto RSN wpa-pairwise CCMP wpa-group CCMP wpa-key-mgmt WPA-PSK wpa-psk dca1badb5fd4e9axxx4xxdaaxxfa91xx610bxx6a7d57ef67af9809dxx6af42e39 /etc/wpa_supplicant.conf ctrl_interface=/var/run/wpa_supplicant network={ ssid="TPLINK" psk="my password" key_mgmt=WPA-PSK proto=RSN pairwise=CCMP group=CCMP } ifdown eth0 ifdown: interface eth0 not configured ifconfig eth0 Link encap:Ethernet HWaddr 00:11:xx:xx:xx:09 inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:213690 errors:0 dropped:0 overruns:0 frame:0 TX packets:155266 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:220057808 (220.0 MB) TX bytes:21137696 (21.1 MB) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:196412 errors:0 dropped:0 overruns:0 frame:0 TX packets:196412 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:153270697 (153.2 MB) TX bytes:153270697 (153.2 MB) wlan0 Link encap:Ethernet HWaddr f0:7d:xx:xx:xx:13 inet addr:192.168.0.12 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11335 errors:0 dropped:0 overruns:0 frame:0 TX packets:7287 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2563290 (2.5 MB) TX bytes:855746 (855.7 KB) ifconfig eth0 down ifconfig eth0 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:09 inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2 errors:0 dropped:0 overruns:0 frame:0 TX packets:1 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:192 (192.0 B) TX bytes:94 (94.0 B) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING 
MTU:16436 Metric:1 RX packets:196418 errors:0 dropped:0 overruns:0 frame:0 TX packets:196418 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:153270871 (153.2 MB) TX bytes:153270871 (153.2 MB) wlan0 Link encap:Ethernet HWaddr f0:7d:xx:xx:xx:13 inet addr:192.168.0.12 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11359 errors:0 dropped:0 overruns:0 frame:0 TX packets:7293 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2565482 (2.5 MB) TX bytes:856363 (856.3 KB) ip route default via 192.168.0.1 dev wlan0 metric 100 169.254.0.0/16 dev wlan0 scope link metric 1000 192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.12 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.102 ping -I wlan0 -c 4 8.8.8.8 PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data. --- 8.8.8.8 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3024ms ping -I eth0 -c 3 router PING router (192.168.0.1) from 192.168.0.102 eth0: 56(84) bytes of data. --- router ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2015ms ping -I wlan0 -c 3 router PING router (192.168.0.1) from 192.168.0.12 wlan0: 56(84) bytes of data. --- router ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2014ms Let me know if you need more info. Thank you in advance.
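    Not a confirmed fix, but the route tables quoted above show the most likely culprit: both eth0 and wlan0 are configured statically on 192.168.0.0/24 and both install a default route, so traffic can leave on the wrong interface. A sketch of the first thing to try (addresses taken from the question):

      sudo ip route del default via 192.168.0.1 dev eth0   # drop the duplicate default route
      ping -I wlan0 -c 4 8.8.8.8                            # re-test over wireless only
      # For a persistent change, remove the 'gateway 192.168.0.1' line from one of
      # /etc/network/interfaces.eth0 or /etc/network/interfaces.wlan0 so that only
      # one interface ever provides the default gateway.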

    Read the article

  • Shrink NTFS Windows 7 Partition with GParted

    - by user15961
    I am running a dual-boot system with Windows 7 and Ubuntu 10.10. Initially I allocated about 20GB for my Ubuntu partition; however, I quickly ran out of that space and am now looking to expand my partition. Currently my NTFS partition (450GB) has about 130GB of free space. I tried using GParted to shrink the partition but encountered the following error. I booted into windows so I could run chkdsk but the countdown freezes at 1 upon reboot. I tried multiple methods to resolve that issue but nothing seems to work. Finally I gave up, and now I just want to know what is the best way for me to force GParted to shrink the partition regardless of the errors. I don't really have anything important and I don't mind risking the data. I just don't want to wipe the entire NTFS partition because I don't have a Windows install CD and might require Windows later on for some programs. I tried using sudo ntfsresize but that spews out the same error as GParted... Any ideas? Check and repair file system (ntfs) on /dev/sda2 00:00:09 ( ERROR ) calibrate /dev/sda2 00:00:00 ( SUCCESS ) path: /dev/sda2 start: 36944325 end: 976771119 size: 939826795 (448.14 GiB) check file system on /dev/sda2 for errors and (if possible) fix them 00:00:09 ( ERROR ) ntfsresize -P -i -f -v /dev/sda2 ntfsresize v2.0.0 (libntfs 10:0:0) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 481191318016 bytes (481192 MB) Current device size: 481191319040 bytes (481192 MB) Checking for bad sectors ... Checking filesystem consistency ... Cluster 63468 is referenced multiple times! Cluster 63469 is referenced multiple times! Cluster 63465 is referenced multiple times! Cluster 63466 is referenced multiple times! Cluster 63467 is referenced multiple times! Cluster 165621 is referenced multiple times! Cluster 165622 is referenced multiple times! Cluster 165623 is referenced multiple times! Cluster 165624 is referenced multiple times! ERROR: Filesystem check failed! ERROR: 9 clusters are referenced multiply times. NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE! The usage of the /f parameter is very IMPORTANT! No modification was and will be made to NTFS by this software until it gets repaired.

    Read the article

  • Accessing second hard drive

    - by Jonathan
    Hi, so I recently installed Ubuntu 10.10 64-bit on my computer. I installed it on my 60 GB SSD, and during the installation it never acknowledged the existence of my second hard drive. The hard drive that I keep all my files on, and which I want to make my home folder if I can, is a Western Digital Caviar Black 1TB SATA 6Gb/s 64MB cache (WD1002FAEX). I've read the following: https://help.ubuntu.com/community/Mount but honestly cannot work out how to access the hard drive from my Ubuntu installation. I did have Windows 7 64-bit prior to installing Ubuntu. I have backed up all the files on the hard drive, but if I could just access them straight off that would be super cool. Does anyone know how I can use the second hard drive? Thank you for your help. EDIT: The following directories are currently in my /dev/ folder: ati/, block/, bsg/, bus/, char/, cpu/, disk/, input/, mapper/, net/, pktcdvd/, pts/, shm/, snd/, and usb/ EDIT: Result from sudo fdisk -l:
      Disk /dev/sda: 60.0 GB, 60022480896 bytes
      255 heads, 63 sectors/track, 7297 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000d2dfd
      Device Boot    Start      End     Blocks   Id  System
      /dev/sda1  *       1     6994   56174592   83  Linux
      /dev/sda2       6994     7298    2438145    5  Extended
      /dev/sda5       6994     7298    2438144   82  Linux swap / Solaris
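    The fdisk listing above only shows the 60 GB SSD (/dev/sda), so the first step is to check whether the 1 TB drive is detected at all; if it is, mounting it is straightforward. A sketch, where /dev/sdb1 and the NTFS type are assumptions (the disk previously held Windows files):

      sudo fdisk -l                                 # the WD drive should appear as a second disk, e.g. /dev/sdb
      sudo mkdir -p /media/data
      sudo mount -t ntfs-3g /dev/sdb1 /media/data   # NTFS assumed; use whichever partition fdisk reports
      # To mount it at every boot, add a matching line to /etc/fstab, e.g.:
      #   /dev/sdb1  /media/data  ntfs-3g  defaults  0  0
      # If fdisk does not list the drive at all, the problem is below the OS
      # (cabling, SATA port, or BIOS), not a mounting issue.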

    Read the article

  • Nginx Subdomain Problem

    - by user292299
    i can't access my subdomain on localhost. my localdomain is localhost.dev and it's work.but i want to auto subdomain for php script (username.localhost.dev) i try this server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name localhost.dev ***.localhost.dev**; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } it's not working.i change server_name for testing server_name localhost.dev asd.localhost.dev; i can't access asd.localhost.dev and i try this double server{} section # You may add here your # server { # ... # } # statements for each of your virtual hosts to this file ## # You should look at the following URL's in order to grasp a solid understanding # of Nginx configuration files in order to fully unleash the power of Nginx. # http://wiki.nginx.org/Pitfalls # http://wiki.nginx.org/QuickStart # http://wiki.nginx.org/Configuration # # Generally, you will want to move this file somewhere, and start with a clean # file but keep this around for reference. Or just disable in sites-enabled. # # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. ## server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name localhost.dev; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. 
try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } ############################### server { access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name asd.localhost.dev; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # index index.html index.htm; # # location / { # try_files $uri $uri/ =404; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # # root html; # index index.html index.htm; # # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # # ssl_session_timeout 5m; # # ssl_protocols SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; # ssl_prefer_server_ciphers on; # # location / { # try_files $uri 
$uri/ =404; # } #} I can't get this to work.
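    Before blaming the nginx config above, it is worth checking that asd.localhost.dev resolves at all -- nothing resolves *.localhost.dev by default, so the request may never reach the server_name match. A quick hosts-file test (names from the question; the dnsmasq line is an assumption for the general wildcard case):

      echo "127.0.0.1  asd.localhost.dev" | sudo tee -a /etc/hosts
      curl -I http://asd.localhost.dev/        # should now hit the intended server block
      # /etc/hosts cannot hold wildcards, so for arbitrary usernames run a local
      # resolver such as dnsmasq with:  address=/localhost.dev/127.0.0.1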

    Read the article

  • How to save during real-time collaboration

    - by dev.e.loper
    I want multiple users to edit the same document. The problem I'm facing is that when a new user joins, they might see an outdated document. How do I make sure that new users get the most recent changes? Some solutions I thought of: Save on every change. I don't like this solution because it will slow things down in the UI and put load on the DB. When a new user joins, trigger a save on all other clients; after the other clients have saved, load the document. Even with this there can still be inconsistency. Any other suggestions would be helpful.

    Read the article

  • Cloning a dual boot system from HDD to SSD

    - by Alex
    I'm planning on replacing my laptop's HDD with a 256GB SSD, but I have a dual-boot (12.04 and Windows 7) setup and I'd like to be able to directly migrate Ubuntu over without having to reinstall and lose all of my settings. GParted reports the following partition setup on my HDD. I am, of course, able to modify it if necessary. /dev/sda1 (NTFS) 66.92 out of 200.00 MB used I'm honestly not sure what this partition is for. Maybe for Windows 7 system files? I'm hesitant to mess with it. (edit; it turns out it is a partition for Windows recovery files in the event of OS corruption, so I don't want to remove it. Plus it also appears to be a major pain to remove anyways) /dev/sda2 (NTFS) 116.35 out of 339.06 GB used (boot) This partition is the C:/ drive on my Windows installation. I don't use it on my Ubuntu installation, except it is the boot partition and thus has grub on it. /dev/sda4 (extended) > /dev/sda5 (ext4) 14.49 out of 91.34 GB used > /dev/sda6 (linux-swap) 5.92 GB These are my Ubuntu partitions. /sda5 contains my documents and all of the files I use on Ubuntu, and (as far as I know) the system files for Ubuntu itself (it's the partition I created when prompted by the Live-DVD installer). /sda6 is, of course, the swap partition which I only need for hibernation (6GB of RAM). /dev/sda3 (NTFS) 9.89 out of 14.75 GB used This is an annoying partition that Lenovo created to store some drivers and files that I might need later on. For example, it allows me to use OneKeyRecovery for a quick factory recovery if absolutely necessary, not sure if that'll work on an SSD. It also contains not-so-important files for bloatware installation. In total, my HDD only has about 150GB of files on it so it should fit comfortably on the SSD. The problem is, I want to exactly migrate my files, partitions, OSes, MBR, etc. from my HDD to my SSD and I'm not quite sure how to do this. I've seen CloneZilla referenced before, but I'm not all too experienced and the documentation for it quite frankly seems a bit like a foreign language to me. So, put simply, is there any way I can exactly clone this HDD to an SSD without a massive headache? Also, if it matters, I'll probably be using an external hard drive case (as recommended in online tutorials) to externally attach the SSD to my laptop during the cloning process due to the lack of two hard drive slots in the machine.
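    Because the HDD's partition layout above (roughly 450 GB across sda1–sda6) is larger than the 256 GB SSD, a sector-for-sector clone of the whole disk will not fit; the partitions have to be recreated smaller and their contents copied. A very rough sketch of just the Ubuntu half, run from a live session -- every device name is an assumption (the SSD in the USB enclosure is taken to be /dev/sdb, with a new ext4 root created beforehand as /dev/sdb5):

      sudo mkdir -p /mnt/old /mnt/new
      sudo mount -o ro /dev/sda5 /mnt/old                        # existing Ubuntu root, read-only
      sudo mkfs.ext4 /dev/sdb5 && sudo mount /dev/sdb5 /mnt/new
      sudo rsync -aAXH /mnt/old/ /mnt/new/                       # copy files, preserving permissions/ACLs/xattrs
      sudo blkid /dev/sdb5                                       # new UUID -- update /mnt/new/etc/fstab to match
      sudo grub-install --boot-directory=/mnt/new/boot /dev/sdb  # bootloader onto the SSD
      # Finish by running update-grub from a chroot into /mnt/new so the menu entries
      # and UUIDs are regenerated; the Windows partition needs its own copy (e.g.
      # ntfsclone after shrinking it in Windows) before it will boot from the SSD.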

    Read the article

  • How to limit upload bandwidth per user in Linux?

    - by Gihan Lasita
    Can anyone provide the tc command to limit upload bandwidth per user on Debian Lenny? I found that to mark packets per user with iptables I can use the following command: iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner testuser -j MARK --set-mark 500 but I have no idea how to use tc. Update: by running the following commands, I managed to limit testuser's upload bandwidth to 10Mbit:
      iptables -t mangle -N HTB_OUT
      iptables -t mangle -I POSTROUTING -j HTB_OUT
      iptables -t mangle -A HTB_OUT -j MARK --set-mark 30
      iptables -t mangle -A HTB_OUT -m owner --uid-owner testuser -j MARK --set-mark 10
      tc qdisc replace dev eth0 root handle 1: htb default 30
      tc class replace dev eth0 parent 1: classid 1:1 htb rate 10Mbit burst 5k
      tc class replace dev eth0 parent 1:1 classid 1:10 htb rate 10Mbit ceil 10Mbit
      tc qdisc replace dev eth0 parent 1:10 handle 10: sfq perturb 10
      tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10
    Now the problem is, I do not want to limit testuser's FTP bandwidth, but by running the above commands the FTP speed is also limited to 10Mbit. Regards
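    An untested extension of the rules quoted above rather than a verified answer: FTP can be exempted by returning from the marking chain before the per-user mark is applied, so those packets never match the fw filter and stay unshaped. Passive-mode FTP uses random high data ports, so this only covers the simple active-mode case:

      # Insert ahead of the existing HTB_OUT rules so FTP control/data traffic
      # (ports 21 and 20) skips the --set-mark 10 rule entirely.
      iptables -t mangle -I HTB_OUT 1 -p tcp -m multiport --sports 20,21 -j RETURN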

    Read the article
