Search Results

Search found 6729 results on 270 pages for 'practical answers'.

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs, with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13-byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e. the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single-shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690 (2.9 GHz, 2 sockets, 32 threads, 193 GB RAM) and the other 3 were Xeon E5-2680 (2.7 GHz, 2 sockets, 32 threads, 193 GB RAM). There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670 (2.93 GHz, 2 sockets, 24 threads, 96 GB RAM). Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change was to the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e. additional load) are added, they each see the same number of collisions themselves, but do not increase the number of collisions on any single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards).
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling. Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column shows the total number of records, with the number of shards and the replication factor in parentheses; the first row therefore indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes, and "Threads" is the number of threads per process, with the total number of threads in parentheses (hence 90 threads per YCSB process for a total of 360 threads in the first row). The client processes were distributed across 10 client machines.

    Shards | KVS Size (records) | Clients | Threads   | Overall Mixed Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
    2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                            | 0.76/1/3                     | 3.08/8/35
    4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                            | 0.79/1/4                     | 3.82/16/45
    8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                          | 0.85/2/5                     | 4.29/21/51
    10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                          | 0.88/2/6                     | 4.47/23/53
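    For readers unfamiliar with versioned operations, here is a minimal sketch of what a read/modify/write update looks like against the Oracle NoSQL Database key/value API. It assumes the oracle.kv KVStore interface and a record that already exists; the retry loop and payload handling are illustrative, not the benchmark's actual client code.

        import oracle.kv.KVStore;
        import oracle.kv.Key;
        import oracle.kv.Value;
        import oracle.kv.ValueVersion;
        import oracle.kv.Version;

        public class CasUpdate {
            // Versioned update: re-read and retry if another client modified the
            // record between our get() and putIfVersion(), so no update is lost.
            static void readModifyWrite(KVStore store, Key key, byte[] newPayload) {
                while (true) {
                    // read the current value and its version (assumes the key exists)
                    ValueVersion current = store.get(key);
                    Version applied = store.putIfVersion(
                            key, Value.createValue(newPayload), current.getVersion());
                    if (applied != null) {
                        return; // our write applied against the version we read
                    }
                    // version mismatch: somebody updated the record first; retry
                }
            }
        }

    Durability in the configuration described above would come from the store's configured durability policy (simple majority replica acks), not from anything in this loop.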

    Read the article

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding. I have been coding (seriously) for over 15 years now. I have always had some testing for my code. However, over the last few months I have been learning test driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, that at this point the level of frustration mixed with the disappointing lack of performance is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files. Which is not much more useful, maybe less useful, than the actual project code files. In reading the available literature, I notice that TDD seems to be a massive time sink up front, but pays off in the end. I'm just wondering, are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples. I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit testing suite (I think that comment was made in 2008). So, I'm just wondering, after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

    Read the article

  • An invitation to join a JDeveloper and ADF productivity clinic (and more!) at KScope

    - by Chris Muir
    Would you like a chance to influence Oracle's decisions on tool usability and productivity? If you're attending ODTUG's Kaleidoscope conference this year in San Antonio, Oracle would like to invite you to participate in our Usability Activity Research and, separately, our JDeveloper and ADF Productivity Clinics with our experienced user experience teams. The teams are keen to hear what you have to say about your experiences with our tools in general, and specifically with JDeveloper and ADF. The details of each event are described below. Invitation to Usability Activity - Sunday June 24th to Wednesday June 27th: Oracle is constantly working on new tools and new features for developers, and invites YOU to become a key part of the process! As a special addition to Kscope 12, Oracle will be conducting onsite usability research in the Alyssum room, from Sunday June 24 to Wednesday June 27. Usability activities are scheduled ahead of time for participants' convenience. If you would like to take part, please fill out this form to let us know the session(s) you would like to attend and your development experience. You will be emailed your scheduled session before the start of the conference. JDeveloper and ADF Productivity Clinic - Thursday June 28th: Are you concerned that Java, Oracle ADF or JDeveloper is difficult? Is JDeveloper making you jump through hoops? Do you hate a particular dialog or feature of JDeveloper? Well, come and get things off your chest! Oracle is hosting a product management and user experience clinic where we want to hear about your issues and concerns. What's difficult to use? What doesn't work the way you want, and how would you want it to work? What isn't behaving like your current favorite tool? If we can't help you on the spot, we'll take your feedback and use it to improve the product experience. A great opportunity to get answers, or get improvements. Drop by the Alyssum room anytime from 8:30 to 10:30 on Thursday, June 28. We look forward to seeing you at KScope soon!

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and, subsequently, a world format. If I were to handle the game "world" being created just as a layered set of structures, in either top or side views, it would be considerably simple to do most things. But since this editor is meant for 3rd parties, I have no clue how big a world anyone will want to make, and I need to keep in mind that eventually it will become simply too much to check, handle and compare things that are happening completely away from the player position. I know the solution for this is to subdivide my world into subregions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But while I know theoretically what to do, I really have a few questions I'd hoped to get answered for some hints about the topic:

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks of equal size, or would you let the user subdivide areas to taste with irregularly sized rectangles?

    2. In the case of even grids, would you use some kind of block/chunk neighbouring system to check when the player crosses a boundary, or just put all those blocks in a simple array?

    3. A region being a different data structure than its owning "game world": when streaming a region, would you deliver the objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?

    4. Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node own the layers as children, and replicate in each layer node a node per region? Or the opposite: the parent node owns all the possible regions, and each region has multiple layers as children? Or would you just keep the region logic outside the graph completely (compatible with the first suggestion in Q.3)?

    5. When I say virtually infinite worlds, I mean it, of course, under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it's sane to think beyond that? I think it's OK to stick to this limit, since it will never be reached so easily.

    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload (see the sketch below). The problem here is that I will need some kind of warps/teleports built into my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load into memory any region that can be teleported to by a warp within a radius of the player?

    Sorry for the huge question, any answers are helpful!
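    Since the question describes the watcher-camera approach only in prose, here is a rough sketch of what an even-grid streamer driven by watchers might look like. It is a sketch under assumed, hypothetical types (Chunk, Watcher) and arbitrary constants, not a recommendation of one answer over another:

        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;

        class Chunk { /* tiles, entities, ... */ }

        class Watcher { float x, y; }   // a camera, the player, or a warp endpoint

        class ChunkStreamer {
            static final int CHUNK_SIZE = 256;  // world units per chunk (illustrative)
            static final int RADIUS = 2;        // chunks kept loaded around a watcher

            final Map<Long, Chunk> loaded = new HashMap<>();

            // pack signed 2D grid coordinates into a single map key
            static long key(int cx, int cy) {
                return ((long) cx << 32) | (cy & 0xFFFFFFFFL);
            }

            void update(List<Watcher> watchers) {
                Set<Long> wanted = new HashSet<>();
                for (Watcher w : watchers) {
                    int cx = (int) Math.floor(w.x / CHUNK_SIZE);
                    int cy = (int) Math.floor(w.y / CHUNK_SIZE);
                    for (int dy = -RADIUS; dy <= RADIUS; dy++)
                        for (int dx = -RADIUS; dx <= RADIUS; dx++)
                            wanted.add(key(cx + dx, cy + dy));
                }
                for (long k : wanted)                        // stream in new chunks
                    loaded.computeIfAbsent(k, unused -> loadChunk());
                loaded.keySet().removeIf(k -> !wanted.contains(k)); // evict the rest
            }

            Chunk loadChunk() { return new Chunk(); }        // disk/network I/O here
        }

    With this shape, the teleport question in Q.6 reduces to registering a temporary Watcher at the warp destination a moment before the jump, so the target region is already resident when the player arrives.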

    Read the article

  • How can I set the date format to my country setting?

    - by Jamina Meissner
    I am German, but I use only English software. Hence, I am also using English Ubuntu. It's not because I don't know how to install German Ubuntu; it's because I prefer to work in an English software environment. However, I would like to keep the date & time in German format, just as I use a German keyboard layout in English Ubuntu. I can set the time format to 24h time, but how can I set the date format to the German format? It is irritating for me to have the day number sitting directly before the time numbers: in other words, instead of "Oct 14 15:16" I want it to display "14. Okt 15:16" or (if only the English language is available) "14 Oct 15:16" or "14th Oct 15:16". At the least, the number of the day should be displayed before the month. In Windows, it was no problem to choose time/date/currency settings according to a chosen country. Where can I do this in Ubuntu? The best would be if I could freely enter the date/time format myself with variables (DD.MM hh.mm.ss etc.). I found answers for Ubuntu 11.04, but not for Ubuntu 12.04. I am using Ubuntu 12.04, 64-bit. Keep in mind that I am a beginner, so I'd like to be able to do this via the GUI, if possible.

    EDIT: I found the answer in a forum:

    1. Go to System Settings... and choose Language Support. There are two tabs, Language and Regional Formats; you are on the Language tab by default.
    2. On the Language tab, click Install / Remove Languages. A window with a list of languages opens.
    3. Mark the language(s) you want to add for your time/date/currency format and click Apply Changes. Ubuntu will now download and install the additional language files, as well as help files of other applications in this language, so don't be irritated.
    4. When Ubuntu has finished applying the changes, switch to the Regional Formats tab. (Do not change the language for menus and windows on the Language tab if you only want to change the date/time/unit format.) There you can choose from the dropdown list the language for your preferred date/time/currency/unit format.
    5. Log out and log in again for the changes to take effect.
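    As an aside for anyone scripting their own clock display rather than using the GUI: the "variables" the question wishes for (DD.MM hh.mm.ss etc.) are ordinary locale-plus-pattern formatting in most languages. A purely illustrative Java example:

        import java.time.LocalDateTime;
        import java.time.format.DateTimeFormatter;
        import java.util.Locale;

        public class GermanClock {
            public static void main(String[] args) {
                // day-before-month pattern with German month names, e.g. "14. Okt 15:16"
                DateTimeFormatter fmt =
                        DateTimeFormatter.ofPattern("dd. MMM HH:mm", Locale.GERMAN);
                System.out.println(LocalDateTime.now().format(fmt));
            }
        }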

    Read the article

  • How to manage a Closed Source High-Risk Project?

    - by abel
    I am currently planning to develop a J2EE website and wish to bring in 1 developer and 1 web designer to assist me. The project is a financial app with a niche market. I plan to keep the source closed. However, I fear that my would-be employees could easily copy the codebase and use it/sell it to a third party, especially when they switch jobs. The app development will take 4-6 months, perhaps more, and I may have to bring in people after the app goes live. How do I keep the source to myself? Are there techniques companies use to guard their source? I foresee disabling pendrives and DVD writers on my development machines, but uploading data or attaching the code in one's mail would still be possible. My question is incomplete. But programmers who have been in my situation, please advise: how should I go about this? Building a team, maintaining code secrecy, etc. I am looking forward to signing a secrecy contract with the employees if needed too. (Please add relevant tags)

    Update: Thank you for all the answers. I certainly won't be disabling all USB ports and DVD writers now. But I think I should be logging activity (how exactly should I do that?). I am wary of scalpers who would join and then run off with the existing code. I haven't met any, but I have been advised to be wary of them. I would include a secrecy clause, but given that this is a startup with almost no funding, in a highly competitive business niche with bigger players in the field, I doubt I would be able to detect or pursue any scalpers. How do I hire people I trust when I don't know them personally? Their resume will be helpful, but otherwise trust will develop only with time. But finally, even if they do run away with the code, it is service that matters after the sale is made. So I am not really worried for the long term.

    Read the article

  • Big-name School for Undergrad Students

    - by itaiferber
    As a soon-to-be graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York) and still getting a pretty decent education while saving myself from over a hundred thousand dollars worth of debt? Yes, I know questions like this have been asked before (namely here and here), but please bear with me because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know: Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I'd easily get rid of over a hundred thousand dollars of debt? And how does the fact that I can get a fifth-year master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth? And if I go to SUNY Binghamton, which is far lesser-known than what I've considered (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit outweighed by a potential long-term loss? The answers to these questions all tie in to my final college decision (again, assuming I make it into these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing). Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but will be getting little to no financial aid (federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.

    Read the article

  • How to move a website and domain name without experiencing downtime for emails or site?

    - by user4842
    Okay, I have a pretty complex problem, so I'll get right to it. I'm a designer who built a new website for my client. Their old site is hosted at GoDaddy, as well as their email. Problem is, the guy who built the original site decided to put the original domain name and hosting under HIS personal GoDaddy account. Well, that turned out to be a bad move for several reasons. Here's how it's all tied together. The original domain name, www.domainoriginal.com, was actually purchased at Network Solutions. The original web designer pointed the nameservers from Network Solutions to his GoDaddy account, where the email and hosting are set up. The new domain name, www.domainnew.com, was purchased under a new and separate GoDaddy account belonging to the company, and the new website was built on a 3rd-party platform (Big Commerce). So, www.domainnew.com is already pointed to the new website using A records at the new GoDaddy account. All is fine there. However, they still need www.domainoriginal.com to point to the NEW website as well. (The old one can simply be deleted; it is NOT important.) AND, they want to keep their old email addresses intact and working as well, but under the NEW GoDaddy account. Obviously, I have no DNS control at Network Solutions, and I have no idea what kind of control I have at GoDaddy under the old account, because the web designer will not let me see inside his account. But he and GoDaddy both tell me nothing can be done other than to repoint the nameservers to Network Solutions, then repoint the A record to my new website, www.domainnew.com, and point the MX records to GoDaddy. I'm told the downtime would be 24-48 hours if I do this. Ideally, we'd like to do a domain name transfer and get www.domainoriginal.com into the new GoDaddy account created by the company. But I'm told this could take up to 7 days. Does this mean the site and email will be down for 7 days? And would any emails sent during this time be lost forever? If I do this, how long could I expect the site and email to be down? And will the emails be permanently lost? I've gotten different answers from everybody at GoDaddy, so I kind of don't trust them anymore... Any help would be greatly appreciated. Thanks, Tyson

    Read the article

  • What economic books would you suggest for learning about economic valuation of goods and simulations thereof?

    - by Rushyo
    I'm looking to create an economic model for a game based on goods created procedurally. Every natural resource and produced good would be procedurally generated, with certain goods being assigned certain uses. Fakesium might be used for the production of Weapon A and produced from Fakesium factories which use Dilithium and Widgets as reagents, where Widgets are also the product of Foo and Bar. The problem is not creating the resources and their various production utilities, but getting the game's AI empires and merchants to (addendum: somewhat) correctly value the goods according to their scarcity, utility and production costs. I need to create a simulation of goods which allows the various game factions to assign a common value denominator (credits) to each resource, depending on how much it's worth to that empire. I see the simulation being something like: "I have a high requirement for Weapon A. Since I don't have much of Fakesium, which is needed for Weapon A, I must have a high demand for Fakesium. If I can acquire Fakesium, devalue it. If not, increase its value, and also increase demand for Dilithium and Widgets too." This is very naive, because it may be much, much cheaper for the empire to simply purchase Dilithium and Widgets directly rather than purchasing Fakesium, for example. Another consideration is that two resources might allow the creation of Weapon A (Fakesium and Lieron), so we'd need to account for that. I've been scratching my head over the problem and it keeps growing. By the time the player joins the world, I'd expect enough iterations of this process to have occurred that prices would have largely normalised, and it would then only trigger rarely to compensate for major changes (e.g. if the player blows up the world's only Foo mine!). Could anyone suggest resources (books, largely) which outline this style of modelling, preferably in the context of simulations? Since this problem would never occur outside fantasy worlds, I figured this is probably the most likely place to find people who have encountered similar problems, and I'm sure there are people who know of good places for games developers to start looking at less specific economic theory too. Additionally, does anyone know of any developers with blogs whose games or research applications perform similar modelling?

    EDIT: I think I should underline that I'm not looking for optimal solutions. I'm looking to make the actors impulsive, making rudimentary decisions based on fuzzy inputs about what they care about or don't. I'm aiming to understand the problem area better, not derive answers. All the textbooks I've found seem to be about real-world economics or how to solve complex theoretical problems, neither of which is terribly relevant to the actors' decision making.
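    To make that style of simulation concrete, here is a deliberately naive sketch of the "if I can acquire it, devalue it; if not, raise its value" rule from the question, plus a demand pass that pushes unmet demand for a product down onto its reagents. Everything here (names, the 5% step, the recipe map) is hypothetical:

        import java.util.HashMap;
        import java.util.Map;

        class NaiveMarket {
            static final double STEP = 0.05;  // 5% price nudge per tick (arbitrary)

            // recipes: product -> reagents, e.g. "Fakesium" -> {"Dilithium", "Widgets"}
            final Map<String, String[]> recipes = new HashMap<>();
            final Map<String, Double> price  = new HashMap<>();
            final Map<String, Double> demand = new HashMap<>();
            final Map<String, Double> stock  = new HashMap<>();

            void tick() {
                // 1) propagate unmet demand for products onto their reagents
                for (Map.Entry<String, String[]> r : recipes.entrySet()) {
                    double unmet = demand.getOrDefault(r.getKey(), 0.0)
                                 - stock.getOrDefault(r.getKey(), 0.0);
                    if (unmet > 0)
                        for (String reagent : r.getValue())
                            demand.merge(reagent, unmet, Double::sum);
                }
                // 2) scarce goods drift up in price, abundant goods drift down
                for (String good : price.keySet()) {
                    boolean scarce = demand.getOrDefault(good, 0.0)
                                   > stock.getOrDefault(good, 0.0);
                    double factor = scarce ? 1.0 + STEP : 1.0 - STEP;
                    price.put(good, Math.max(0.01, price.get(good) * factor));
                }
            }
        }

    Run for enough ticks before the player joins, prices drift toward something reflecting both scarcity and (indirectly, via the demand pass) production inputs, which is the impulsive, fuzzy behaviour the EDIT asks for, not an optimal market clearing.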

    Read the article

  • ADF Reusable Artefacts

    - by Arda Eralp
    Primary reusable ADF Business Components:
    - Entity Objects (EOs)
    - View Objects (VOs)
    - Application Modules (AMs)
    - Framework Extension Classes

    Primary reusable ADF Controller:
    - Bounded Task Flows (BTFs)
    - Task Flow Templates

    Primary reusable ADF Faces:
    - Page Templates
    - Skins
    - Declarative Components
    - Utility Classes

    Certain components will often be used more than once. Whether the reuse happens within the same application or across different applications, it is often advantageous to package these reusable components into a library that can be shared between different developers, across different teams, and even across departments within an organization.

    In the world of Java object-oriented programming, reusing classes and objects is just standard procedure. With the introduction of the model-view-controller (MVC) architecture, applications can be further modularized into separate model, view, and controller layers. By separating the data (model and business services layers) from the presentation (view and controller layers), you ensure that changes to any one layer do not affect the integrity of the other layers. You can change business logic without having to change the UI, or redesign the web pages or front end without having to recode domain logic.

    Oracle ADF and JDeveloper support the MVC design pattern. When you create an application in JDeveloper, you can choose from many application templates that automatically set up data model and user interface projects. Because the different MVC layers are decoupled from each other, development can proceed on different projects in parallel and with a certain amount of independence.

    ADF Library further extends this modularity of design by providing a convenient and practical way to create, deploy, and reuse high-level components. When you first design your application, you design it with component reusability in mind. If you created components that can be reused, you can package them into JAR files and add them to a reusable component repository. If you need a component, you can look in the repository for it and add it into your project or application. For example, you can create an application module for a domain and package it to be used as the data model project in several different applications. Or, if your application will be consuming components, you may be able to load a page template component from a repository of ADF Library JARs to create pages with a common look and feel. Then you can put your page flow together by stringing together several task flow components pulled from the library.

    An ADF Library JAR contains ADF components and does not, and cannot, contain other JARs. It should not be confused with the JDeveloper library, Java EE library, or Oracle WebLogic shared library.

    Reusable components:

    Data Control: Any data control can be packaged into an ADF Library JAR. Some of the data controls supported by Oracle ADF include application modules, Enterprise JavaBeans, web services, URL services, JavaBeans, and placeholder data controls.

    Application Module: When you are using ADF Business Components and you generate an application module, an associated application module data control is also generated. When you package an application module data control, you also package up the ADF Business Components associated with that application module. The relevant entity objects, view objects, and associations will be part of the ADF Library JAR and available for reuse.

    Business Components: Business components are the entity objects, view objects, and associations used in the ADF Business Components data model project. You can package business components by themselves or together with an application module.

    Task Flows & Task Flow Templates: Task flows can be packaged into an ADF Library JAR for reuse. If you drop a bounded task flow that uses page fragments, JDeveloper adds a region to the page and binds it to the dropped task flow. ADF bounded task flows built using pages can be dropped onto pages; the drop will create a link to call the bounded task flow. A task flow call activity and control flow will automatically be added to the task flow, with the view activity referencing the page. If there is more than one existing task flow with a view activity referencing the page, you will be prompted to select the one to which the task flow call activity and control flow are added automatically. If an ADF task flow template was created in the same project as the task flow, the template will be included in the ADF Library JAR and will be reusable.

    Page Templates: You can package a page template and its artifacts into an ADF Library JAR. If the template uses image files and they are included in a directory within your project, these files will also be available to the template during reuse.

    Declarative Components: You can create declarative components and package them for reuse. The tag libraries associated with the component will be included and loaded into the consuming project.

    You can also package projects that have several different reusable components if you expect that more than one component will be consumed. For example, you can create a project that has both an application module and a bounded task flow. When this ADF Library JAR file is consumed, the application will have both the application module and the task flow available for use. You can package multiple components into one JAR file, or you can package a single component into a JAR file. Oracle ADF and JDeveloper give you the option and flexibility to create reusable components that best suit you and your organization.

    You create a reusable component by using JDeveloper to package and deploy the project that contains the components into an ADF Library JAR file. You use the components by adding that JAR to the consuming project. At design time, the JAR is added to the consuming project's class path and so is available for reuse. At runtime, the reused component runs from the JAR file by reference.

    Read the article

  • Unable to ping inside or outside network with default gateway 0.0.0.0

    - by agentroadkill
    I've been around here before and I could usually piece together everything to more or less get myself up and running, but this time I'm truly stumped. I'm trying to connect my new 14.04 install to a network, and I'm forced to be behind my college's router. Now, I've tested the very cable that is right now plugged into my Ubuntu box on a Windows, a Mac OS X, and even my friend's Ubuntu 14.04 box, and they all connect no problem. I've been trying to track this down for about two days, but every time I get close to it, the bug jumps to some other piece of my connection. Anyway, as it sits, ifconfig -a gives:

    eth2      Link encap:Ethernet  HWaddr 00:1f:bc:08:31:1d
              inet addr:10.32.51.51  Bcast:10.32.51.255  Mask:255.255.255.0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              RX bytes:0  TX bytes:0

    as well as the local loopback, but I'm assuming that is not an issue here. sudo dhclient -v eth2 returns:

    Listening on LPF/<hardware address of my integrated NIC, above>
    Sending on   <same>
    Sending on   Socket/fallback
    DHCPREQUEST of 10.32.51.51 on eth2 to 255.255.255.255 port 67 (xid=0x6f4a66ba)
    <two more lines of same>
    DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 3 (xid=0x156f9fb4)
    <many more of above with varying intervals>
    No DHCPOFFERS received.
    Trying recorded lease 10.32.51.51
    RTNETLINK answers: File exists
    bound: renewal in <large number> seconds

    If I then try ping 8.8.8.8, I get:

    connect: Network is unreachable

    /etc/resolv.conf only contains the two lines telling you not to edit it, while /etc/network/interfaces only has the loopback interface block in it. I've tried commenting out the "option rfc3442" line in /etc/dhcp/dhclient.conf, which seemed to fix this issue for many people, as well as adding the line send vendor-class-identifier "MSFT5.0" to dhclient.conf to tell the router I'm a Windows box, in case they don't like Linux. Finally, route -n reveals:

    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    10.32.51.0      0.0.0.0         255.255.255.0   U     0      0        0 eth2

    I would like to apologize in advance for the doubtless butchered text alignment, but I'm obviously typing this all by hand, reading from the terminal as I type commands. I'm hoping this is an interesting problem, and not something I blithely stumbled past in my (apparent) over-confidence. TIA!

    Quick addendum before posting: the activity lights on the ethernet port are lit and one blinks during boot, but they rarely (and seemingly randomly) do so afterwards (both are dark), even while running dhclient in the foreground. When I had the Ubuntu box tethered to my MacBook earlier, I got what looked like a normal power/uplink blinking pattern, but was unable to ping one machine from the other.

    Read the article

  • need some concrete examples on user stories, tasks and how they relate to functional and technical specifications

    - by gideon
    Little heads-up: I'm the only lonely dev building/planning/mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, with good business value, and I've coded most of the UI as dirty throwable mockups over nicely abstracted and very minimal base code. In the end, I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore, because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex with more features and ideas. So I started using and learning Agile & Scrum. Here are my problems:

    1. I know what a functional spec is (sample): do all user stories and/or scenarios become part of the functional spec?

    2. I know what user stories and tasks are. Are these kinda user stories? I don't see any business-value reason added to them. I made a mind map using FreeMind, and I had problems like this: Actor: Finance Manager. Can add a financial plan into the system because... well, that's the point of it? What business-value reason do I add for things like this? Example: a user needs to be able to add a blog article (in the blogger app) because..?? It's the point of a blogger app; it centers around that feature.

    3. How do I go into the finer details and system definitions? Actor: Finance Manager. Action: adds a finance plan. This adding is a complicated process with lots of steps. What user story will describe what a finance plan in the system is? I can add it to the functional spec under definitions, explaining what a finance plan is and how one needs to add it into the system, but how do I get to the backlog planning from there? Example: a blog article is mostly a textual document that can be written in rich text in the system. To add a blog article one must...... But how do you create backlog lists/features out of this? Where are the user stories for what a blog article is and how one adds/removes it?

    Finally, I'm a little confused about the relations between functional specs and user stories. Will my spec contain user stories with UI mockups? And will these user stories then branch out into tasks which make up something like a technical specification? Example: Editor user can add a blog article. Use XML to store the blog article. Add a form to add a blog. Add Windows Live Writer support. Those would be agile tasks, but would they also be part of, or form, the technical specs? Some concrete examples/answers to my questions would be nice!!

    Read the article

  • How to explain bad software to non-technical people?

    - by mtutty
    In discussing software development with non-technical people (customers, business owners, project sponsors, etc.), I often resort to analogies and metaphors. It's relatively easy and effective to use a "house" or other metaphor for describing the size and complexity of new development. However, we often inherit someone else's code or data, and this approach doesn't seem to hold up as well when trying to explain why we're gutting something that already seems to work. Of course we can point to cycle time and cost to be saved in the future, but this generally means nothing to business folks. I know doctors can say "just take this pill," but I'm not sure that software devs have the same authority. Ideas?

    EDIT: Let me add a bit to the discussion. The specific project I'm talking about has customers that don't realize (or care) about specific aspects of the system we're retiring (i.e., they think it was just fine):

    - The system would save a NEW RECORD every time someone updated a field.
    - The system contained tables for reference data. These tables had new records added every day, even though they were duplicates of previous records. And there was no way to tie the reference data used for a particular case at the time it was closed. This is like 99% of the data in the old system.
    - The field NAMES also have spaces, apostrophes and other inappropriate characters in them, making everything harder to work with.
    - In addition to the incredible amount of duplicate data, they have around 1000 XLS files with data they want added to the system. Previously, they would do a spreadsheet for each case in the database, IN ADDITION TO what they typed into the database.

    Getting rid of this old, unneeded information and piping in the XLS data comprises about 80% of the total project effort, and was not something we could accurately predict. I'm trying to find a concrete way to describe how bad this thing was, mostly so that the customer will understand why the migration process has been so time-consuming. The actual coding was done pretty quickly and the new system works fine, but without the old data they won't be happy. Sorry to get into the weeds, but most of the answers I've seen so far are pretty basic scope/schedule/cost things. I've been doing this for 15 years, so this really is more of a reflective, philosophical question - but without some of the details it can be difficult to really appreciate the awful beauty of this problem.

    Read the article

  • "static" as a semantic clue about statelessness?

    - by leoger
    This might be a little philosophical, but I hope someone can help me find a good way to think about this. I've recently undertaken a refactoring of a medium-sized project in Java to go back and add unit tests. When I realized what a pain it was to mock singletons and statics, I finally "got" what I've been reading about them all this time. (I'm one of those people that needs to learn from experience. Oh well.) So, now that I'm using Spring to create the objects and wire them around, I'm getting rid of static keywords left and right. (If I could potentially want to mock it, it's not really static in the same sense that Math.abs() is, right?) The thing is, I had gotten into the habit of using static to denote that a method didn't rely on any object state. For example:

    //Before
    import com.thirdparty.ThirdPartyLibrary.Thingy;

    public class ThirdPartyLibraryWrapper {
        public static Thingy newThingy(InputType input) {
            return new Thingy.Builder().withInput(input).alwaysFrobnicate().build();
        }
    }
    //called as...
    ThirdPartyLibraryWrapper.newThingy(input);

    //After
    public class ThirdPartyFactory {
        public Thingy newThingy(InputType input) {
            return new Thingy.Builder().withInput(input).alwaysFrobnicate().build();
        }
    }
    //called as...
    thirdPartyFactoryInstance.newThingy(input);

    So, here's where it gets touchy-feely. I liked the old way because the capital letter told me that, just like Math.sin(x), ThirdPartyLibraryWrapper.newThingy(x) did the same thing the same way every time. There's no object state to change how the object does what I'm asking it to do. Here are some possible answers I'm considering:

    - Nobody else feels this way, so there's something wrong with me. Maybe I just haven't really internalized the OO way of doing things! Maybe I'm writing in Java but thinking in FORTRAN or somesuch. (Which would be impressive, since I've never written FORTRAN.)
    - Maybe I'm using staticness as a sort of proxy for immutability for the purposes of reasoning about code. That being said, what clues should I have in my code for someone coming along to maintain it to know what's stateful and what's not?
    - Perhaps this should just come for free if I choose good object metaphors? E.g., thingyWrapper doesn't sound like it has state independent of the wrapped Thingy, which may itself be mutable. Similarly, a thingyFactory sounds like it should be immutable but could have different strategies, chosen among at creation.

    I hope I've been clear, and thanks in advance for your advice!

    Read the article

  • Tech Talk: Can we put "Simple" in Application Business Process Management?

    - by Tanu Sood
    Customers are challenged with looking for answers to basic questions: Why can't it be simpler to connect applications in the cloud to applications on-premise? How do I make my business more responsive and agile? How do I create end-to-end processes joining multiple applications? Tune in to this Tech Talk session with Amit Zavery, Vice President of Product Management for Oracle Fusion Middleware, as he discusses the relevance and importance of business process management and how Oracle BPM Suite is benefiting customers across all industries. For other Fusion Middleware talks, subscribe to Fusion Middleware Radio today and visit us on oracle.com

    Read the article

  • Career advice: stay with PHP or start a new career in something else ( .Net?)

    - by Christian P
    I'm planning on moving to NY in 6-12 months tops, so I'm forced to find a new job. When I'm planning to start my life in another city, it's also probably a good time to think about career changes. I've found a lot of different opinions about PHP vs .NET vs Java, and that is not the topic here. I don't want to start a new fight about which language is better. Knowing a programming language is not the most important thing for being a software developer. To be a really good developer you need to know OOP, design patterns, testing... and the language is just a tool to make things happen. So, back to my question. I have mixed experience in IT - 1 year as an IT support guy (Windows administration and support), around 2 years of experience in embedded programming (VB.NET 2005), and for the last 2 years I've been working with PHP/MySQL. I have worked with the Magento web shop, assisted in some projects in Symfony, and modified a few Drupal sites. My main concerns are the following: Do I continue to improve my skills in PHP, e.g. start learning a major PHP framework like Zend or Symfony, maybe get some PHP certification? Or do I start learning .NET or Java? I'm more familiar with .NET, so I'll probably choose it if the choice falls between .NET and Java (or you could convince me to choose Java :). Career-wise, I don't know what the best choice is. Learning a new framework and language is more time-consuming than improving my existing skills in PHP. But with .NET you have a lot of possibilities (Windows Phone 7 development, Silverlight, WPF) and possibly bigger chances of finding better jobs. PHP jobs are paid less than .NET ones, at least according to my research (correct me if I'm wrong). But if I start now with .NET, I'm just a beginner and my salary will be low. I need at least 2+ years of experience in some language to even try to find a job paying more than $50-60k in NY. My main goal in the next 2-3 years is to try to find a job in the $60-80k category. Don't get me wrong, I'm not just chasing money, but money is an important factor when you're trying to start a family. I'm 27 years old and I feel that there isn't a lot of room for wrong decisions regarding my career, so any advice will be very welcome.

    Update: Thank you all for spending time to help me with my problem. All of the answers and comments have been very helpful. I have decided to stick with PHP but also to learn C# and Silverlight 4. We'll see where life takes me.

    Read the article

  • Got Samba, Got PyNeighbourhood but still no connection. What else do I need?

    - by Frank A
    I am sure I had already hit post before, but then I could only find the question by backing through the browser. Was it deleted? Is the question too dumb? Sorry that I do not know the right jargon; I'm just trying to get answers to my problem. Anyway, I have reworded stuff a bit.

    This seems to be a number one requirement for lots of people, and 2 months on from setting up my Ubuntu PC, I am still unable to get a lasting connection in either direction. Adding a Windows PC to a network is so easy... just a few clicks and you get on with using it all. Using all-command approaches and modifying configuration files is hardly user friendly. Googling brings up thousands of solutions, but mostly they are too techy or assume the user is fully aware of how to use Linux. I do realise that there must be a lot of flavours for connecting to networks.

    So far I have installed Samba and fiddled with its config file. The day I did all that, it worked from XP to Ubuntu. When I came back two days later to transfer my data over, it would not connect, although the share does show up in Windows (XP) My Network Places. Today I installed PyNeighbourhood, and this shows the Ubuntu box and all of the shares I had created at some point on Ubuntu; it even shows this under the XP workgroup name. But the instructions on setting the connection up seem to relate to an earlier version, and nothing seems to work there either. (I unshared most of those test folders, but they still show up here - that is another question.) When I click on Mount (I can only click on the one for the Ubuntu machine; there is one with no name, which I assume to be my attempt to add an XP shared drive using its IP address), I get errors:

    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
    mount error(6): No such device or address
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    OK, I tried to find the manual referred to... only an old comment that the manual would be produced for future versions. I saw in another thread that Winbind is needed as well, or at least I assume so. Totally lost again? Please help: what else needs to be installed to connect to Windows PCs on the network?

    Read the article

  • Generating tileable terrain using Perlin Noise [duplicate]

    - by terrorcell
    This question already has an answer here: How do you generate tileable Perlin noise?

    I'm having trouble figuring out the solution to this particular algorithm. I'm using the Perlin Noise implementation from: https://code.google.com/p/mikeralib/source/browse/trunk/Mikera/src/main/java/mikera/math/PerlinNoise.java

    Here's what I have so far:

    for (Chunk chunk : chunks) {
        PerlinNoise noise = new PerlinNoise();
        for (int y = 0; y < CHUNK_SIZE_HEIGHT; ++y) {
            for (int x = 0; x < CHUNK_SIZE_WIDTH; ++x) {
                int index = get1DIndex(y, CHUNK_SIZE_WIDTH, x);
                float val = 0;
                for (int i = 2; i <= 32; i *= i) {
                    double n = noise.tileableNoise2(
                            i * x / (float) CHUNK_SIZE_WIDTH,
                            i * y / (float) CHUNK_SIZE_HEIGHT,
                            CHUNK_SIZE_WIDTH, CHUNK_SIZE_HEIGHT);
                    val += n / i;
                }
                // chunk tile at [index] gets set to the colour 'val'
            }
        }
    }

    Which produces something like this: [screenshot of the output omitted]

    Each chunk is made up of CHUNK_SIZE number of tiles, and each tile has a TILE_SIZE_WIDTH/HEIGHT. I think it has something to do with the inner-most for loop and the x/y co-ords given to the noise function, but I could be wrong.

    Solved:

    PerlinNoise noise = new PerlinNoise();
    for (Chunk chunk : chunks) {
        for (int y = 0; y < CHUNK_SIZE_HEIGHT; ++y) {
            for (int x = 0; x < CHUNK_SIZE_WIDTH; ++x) {
                int index = get1DIndex(y, CHUNK_SIZE_WIDTH, x);
                float val = 0;
                // sample in world coordinates (tile size times position, plus the
                // chunk offset) so that adjacent chunks line up seamlessly
                float xx = x * TILE_SIZE_WIDTH + chunk.x;
                float yy = y * TILE_SIZE_HEIGHT + chunk.h;
                int w = CHUNK_SIZE_WIDTH * TILE_SIZE_WIDTH;
                int h = CHUNK_SIZE_HEIGHT * TILE_SIZE_HEIGHT;
                for (int i = 2; i <= 32; i *= i) {
                    double n = noise.tileableNoise2(
                            i * xx / (float) w, i * yy / (float) h, w, h);
                    val += n / i;
                }
                // chunk tile at [index] gets set to the colour 'val'
            }
        }
    }

    Read the article

  • Identifying which pattern fits better.

    - by Daniel Grillo
    I'm developing software to program a device. I have commands like Reset, Read_Version, Read_Memory, Write_Memory, Erase_Memory. Reset and Read_Version are fixed; they don't need parameters. Read_Memory and Erase_Memory need the same parameters, which are Length and Address. Write_Memory needs Length, Address and Data. For each command, I have the same steps in sequence, something like sendCommand, waitForResponse, treatResponse. I'm having difficulty identifying which pattern I should use: Factory, Template Method, Strategy, or some other pattern.

    Edit: I'll try to explain better, taking into account the given comments and answers. I've already written this software and now I'm trying to refactor it. I'm trying to use patterns, even if it is not necessary, because I'm taking advantage of this little program to learn about some patterns. Still, I think that one (or more) pattern fits here and could improve my code.

    When I want to read the version of the software on my device, I don't have to assemble the command with parameters; it is fixed. So I send it, wait for the response, and if there is a response, treat (or parse) it and return. To read a portion of the memory (maximum of 256 bytes), I have to assemble the command using the parameters Length and Address. Then I send it, wait for the response, and if there is a response, treat (or parse) it and return. To write a portion of the memory (maximum of 256 bytes), I have to assemble the command using the parameters Length, Address and Data. Then I send it, wait for the response, and if there is a response, treat (or parse) it and return.

    I think I could use Template Method, because I have almost the same algorithm for all of them. But the problem is that some commands are fixed while others have 2 or 3 parameters. I think the parameters should be passed in the constructor of each class, but then each class will have a constructor overriding the abstract class constructor. Is this a problem for Template Method? Should I use another pattern?
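    For what it's worth, differing constructors are not a problem for Template Method: the template only depends on a fixed hook signature, and each subclass captures its own parameters at construction time. A minimal sketch (all names, opcodes and byte layouts are made up for illustration):

        // The invariant sequence lives in execute(); only assemble() varies.
        abstract class DeviceCommand {
            final byte[] execute() {
                byte[] frame = assemble();       // step that varies per command
                send(frame);                     // fixed step
                byte[] raw = waitForResponse();  // fixed step
                return treatResponse(raw);       // fixed step (could be a hook too)
            }

            protected abstract byte[] assemble();

            private void send(byte[] frame) { /* write frame to the device */ }
            private byte[] waitForResponse() { /* block on the device */ return new byte[0]; }
            protected byte[] treatResponse(byte[] raw) { return raw; }
        }

        class ReadVersion extends DeviceCommand {     // fixed command, no parameters
            protected byte[] assemble() { return new byte[] { 0x01 }; }
        }

        class ReadMemory extends DeviceCommand {      // parameterized command
            private final int length, address;
            ReadMemory(int length, int address) {
                this.length = length;
                this.address = address;
            }
            protected byte[] assemble() {
                return new byte[] { 0x02, (byte) length,
                                    (byte) (address >> 8), (byte) address };
            }
        }

    Usage is uniform regardless of constructor arity: new ReadVersion().execute() or new ReadMemory(128, 0x1000).execute(). Strategy would look almost the same but with composition instead of inheritance; Factory only matters if something else has to decide which command to build.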

    Read the article

  • ecommerce item deleted by user, 301 redirect to HOME PAGE or 404 not found?

    - by Marco Demaio
    I know this question is somewhat similar to this one, where they recommend using 404, but after reading this other one, where they suggest using 301 when changing site URLs (in that specific case due to redesign/refactoring), I got a bit confused, and I hope someone could clarify this for the following specific example:

    Let's say I have an ecommerce site, and let's also say the final user inserted some interesting items in the site, and the ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc. Now let's say some of these interesting items got many external links toward them from many other sites, because some people found those items very interesting and linked to them. After some years the final user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. do not exist anymore, but many pages on the web are still linking toward them. What should the ecommerce site do now, just show a 404 page for those items? But, I'm confused: wouldn't this lose all the Google PR passed by the external links to the item pages? So isn't it better to use a 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE? Thanks.

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. In order to make this question more complete, I would like to talk about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted at point 4); maybe he changes their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items were the old items, so it obviously creates them as new items with new URLs http://...?id=100, http://...?id=101. Does it make sense at this point to 301 redirect the old URLs to the new ones?

    MORE EDIT (It would be VERY IMPORTANT TO UNDERSTAND): Well, according to the clever answers received so far, it seems that for the special case explained in my last EDIT I could use 301, since it's not deceptive: the new page is basically a replacement for the old page in terms of content. This is basically done to keep the PR passed from external links, and also for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links, why not just always use 301? In my understanding Google dislikes duplicated content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicated content by Google?! Google itself suggests 301 redirecting index.html to the document root, so if they considered 301 as duplicated content, wouldn't that be considered duplicated content too?! Why do they suggest it? Let me provoke you: why not just add a 301 to the HOME PAGE for every not-found page?

    (*1) As a user, when I follow a broken URL from some external link to some website's page, I would stick around more on that website if I got redirected to its HOME PAGE rather than seeing a 404 page, where I would think the website does not even exist anymore and maybe not even try to go to its HOME PAGE.
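    To make the 301-only-for-true-replacements policy concrete, here is a hedged sketch of how an item page could answer: 410 Gone for items deleted outright, 301 only when a known replacement exists, and a normal render otherwise. The servlet API calls are standard; findReplacementUrl() and isDeleted() are hypothetical lookups against the shop's own data.

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ItemServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String id = req.getParameter("id");
                String replacement = findReplacementUrl(id);
                if (replacement != null) {
                    // the item was re-created under a new id: permanent redirect
                    resp.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY); // 301
                    resp.setHeader("Location", replacement);
                } else if (isDeleted(id)) {
                    // deleted with no replacement: tell crawlers it is gone for good
                    resp.sendError(HttpServletResponse.SC_GONE);              // 410
                } else {
                    // render the live item page ...
                }
            }

            private String findReplacementUrl(String id) { return null; } // lookup stub
            private boolean isDeleted(String id) { return true; }         // lookup stub
        }

    A blanket 301-everything-to-home rule would collapse the first two branches into one, which is exactly what the answers above warn against: it tells Google the home page is the moved content of every dead URL.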

    Read the article

  • Ray Picking Problems

    - by A Name I Haven't Decided On
    I've read so many answers on here about how to do ray picking that I thought I had the idea of it down. But when I try to implement it in my game, I get garbage. I'm working with LWJGL. Here's the code:

        public static Ray getPick(int mouseX, int mouseY) {
            glPushMatrix();
            // Setting up the mouse clip vector
            Vector4f mouseClip = new Vector4f((float) mouseX * 2 / 960f - 1, 1 - (float) mouseY * 2 / 640f, 0, 1);
            // Loading matrices
            FloatBuffer modMatrix = BufferUtils.createFloatBuffer(16);
            FloatBuffer projMatrix = BufferUtils.createFloatBuffer(16);
            glGetFloat(GL_MODELVIEW_MATRIX, modMatrix);
            glGetFloat(GL_PROJECTION_MATRIX, projMatrix);
            // Assigning matrices
            Matrix4f proj = new Matrix4f();
            Matrix4f model = new Matrix4f();
            model.load(modMatrix);
            proj.load(projMatrix);
            // Multiplying the projection matrix by the modelview matrix
            Matrix4f tempView = new Matrix4f();
            Matrix4f.mul(proj, model, tempView);
            tempView.invert();
            // Getting the camera position in world space: the 4th column of the inverted modelview matrix
            model.invert();
            Point cameraPos = new Point(model.m30, model.m31, model.m32);
            // Theoretically getting the vector the picking ray goes along
            Vector4f rayVector = new Vector4f();
            Matrix4f.transform(tempView, mouseClip, rayVector);
            rayVector.translate((float) -cameraPos.getX(), (float) -cameraPos.getY(), (float) -cameraPos.getZ(), 0f);
            rayVector.normalise();
            glPopMatrix();
            // This basically spits out a value that changes as the camera moves.
            // When the mouse moves, the values change by around 0.001 from screen edge to edge.
            System.out.format("Vector: %f %f %f%n", rayVector.x, rayVector.y, rayVector.z);
            // return new Ray(cameraPos, rayVector);
            return null;
        }

    I don't really know why this isn't working. I was hoping some more experienced eyes might be able to help me out. I can get the camera position like a champ; it's the vector the ray's going in that I can't seem to get right. Thanks.
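
    From what I've read, the usual approach is to unproject two clip-space points (one on the near plane, one on the far plane) and do the perspective divide by w, which my code above skips. A minimal, untested sketch of that approach, assuming LWJGL 2's org.lwjgl.util.vector classes:

        import java.nio.FloatBuffer;
        import org.lwjgl.BufferUtils;
        import org.lwjgl.util.vector.Matrix4f;
        import org.lwjgl.util.vector.Vector3f;
        import org.lwjgl.util.vector.Vector4f;
        import static org.lwjgl.opengl.GL11.*;

        public final class PickRay {

            // Unprojects one NDC point through inverse(projection * modelview).
            private static Vector3f unproject(Matrix4f invPM, float x, float y, float z) {
                Vector4f p = new Vector4f(x, y, z, 1f);
                Matrix4f.transform(invPM, p, p);
                // Perspective divide: without this the result is still homogeneous.
                return new Vector3f(p.x / p.w, p.y / p.w, p.z / p.w);
            }

            // Returns { origin, direction } for the ray under the mouse.
            public static Vector3f[] getPick(int mouseX, int mouseY, int width, int height) {
                float ndcX = 2f * mouseX / width - 1f;
                float ndcY = 1f - 2f * mouseY / height;

                FloatBuffer modBuf = BufferUtils.createFloatBuffer(16);
                FloatBuffer projBuf = BufferUtils.createFloatBuffer(16);
                glGetFloat(GL_MODELVIEW_MATRIX, modBuf);
                glGetFloat(GL_PROJECTION_MATRIX, projBuf);

                Matrix4f model = new Matrix4f();
                Matrix4f proj = new Matrix4f();
                model.load(modBuf);
                proj.load(projBuf);

                // invPM = inverse(projection * modelview)
                Matrix4f invPM = new Matrix4f();
                Matrix4f.mul(proj, model, invPM);
                invPM.invert();

                // Ray origin on the near plane, a second point on the far plane.
                Vector3f near = unproject(invPM, ndcX, ndcY, -1f);
                Vector3f far = unproject(invPM, ndcX, ndcY, 1f);

                Vector3f dir = Vector3f.sub(far, near, null);
                dir.normalise();
                return new Vector3f[] { near, dir };
            }
        }

    Is the missing divide by w (and using two unprojected points instead of translating by the camera position) my problem, or is there something else wrong?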

    Read the article

  • Sony Vaio WLAN problem using 12.04

    - by Fredrik
    I'm unable to get my WLAN to work on my Sony Vaio, model VPCF23C5E. No problem connecting from Windows, smartphones, etc.

        $ sudo lshw -C network; lsb_release -a; uname -a; sudo rfkill list; dmesg | grep -i firm
        *-network
             description: Wireless interface
             product: AR9485 Wireless Network Adapter
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 01
             serial: 64:27:37:92:99:0f
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical wireless
             configuration: broadcast=yes driver=ath9k driverversion=3.2.0-29-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:16 memory:f7000000-f707ffff memory:f7080000-f708ffff
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:05:00.0
             logical name: eth0
             version: 06
             serial: f0:bf:97:dd:b2:bd
             size: 1Gbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-1.fw ip=192.168.1.4 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s
             resources: irq:50 ioport:9000(size=256) memory:e2104000-e2104fff memory:e2100000-e2103fff
        LSB Version:    core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04.1 LTS
        Release:        12.04
        Codename:       precise
        Linux siriedit 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        1: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        2: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no
        [    1.287449] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
        [   18.273582] [Firmware Bug]: ACPI(NGFX) defines _DOD but not _DOS

    I've seen some proposed solutions, e.g. "Wireless network cannot be enabled for Sony VAIO E series", but the answers there don't solve my problem... I'm out of ideas :-( What else can I check?

    Update:

        $ ifconfig -a
        eth0      Link encap:Ethernet  HWaddr f0:bf:97:dd:b2:bd
                  inet addr:192.168.1.4  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::f2bf:97ff:fedd:b2bd/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5072 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4444 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4568435 (4.5 MB)  TX bytes:610624 (610.6 KB)
                  Interrupt:50 Base address:0xc000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1498 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1498 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:114156 (114.1 KB)  TX bytes:114156 (114.1 KB)

        wlan0     Link encap:Ethernet  HWaddr 64:27:37:92:99:0f
                  inet addr:192.168.1.5  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::6627:37ff:fe92:990f/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1277 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:472 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:483155 (483.1 KB)  TX bytes:61031 (61.0 KB)

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I don't use, to speed up package-processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and then give some answers I have already found.

    Question: How do I find out which packages I have not used at all? For example, I always use VLC, so I could remove the totem package (which I could have used some day, yes). Of course, package dependencies may force me to keep programs installed that I will never use.

    Notes:
    - Find the packages that consume the most space via Synaptic: select "Status" in the lower left, select "Installed" in the upper left, and sort the "Size" column in the upper right. Then you can decide which big packages you really need.
    - Use aptitude autoremove.
    - Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt cache entries, etc.
    - Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser, etc. (BTW: this is what I want help with in my question.)
    - When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use Thunderbird), aptitude wants to remove "ubuntu-desktop" and 756 other packages as well, while apt-get removes just evolution and its helper packages, like evolution-common.
    - The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :)
    - Employ deborphan, as I read in this related answer: How do I clean up my harddrive?
    - I should certainly keep essential packages: Keep only essential packages.

    This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes", but that one covers only a few aspects. One answer there suggests a program called unusedpkg, but the link seems to be down. There is also a program called Kleen (http://code.google.com/p/kleen/), but it won't compile on 11.10. I hacked it to compile, but the results are unusable: for example, the g++ package was marked as unused for 203 days, even though I had used it seconds before to compile Kleen itself ;) So don't use this tool. On http://wiki.debian.org/DebianPackageInformation I read that the popularity-contest package will produce log files with usage statistics. Unfortunately, I hadn't enabled popularity-contest, so I can't find this log file.

    Read the article

  • How to make sprint planning fun

    - by Jacob Spire
    Not only are our sprint planning meetings not fun, they're downright dreadful. The meetings are tedious and boring, and take forever (a day, but it feels like a lot longer). The developers complain about them and dread upcoming plannings. Our routine is pretty standard (user story is inserted into the sprint backlog by priority; the story is broken apart into tasks; the tasks are estimated in hours; repeat), and I can't figure out what we're doing wrong. How can we make the meetings more enjoyable?

    ... Some more details, in response to requests for more information:

    Why are the backlog items not inserted and prioritized before sprint kickoff? User stories are indeed prioritized; we have no idea how long they'll take until we break them down into tasks! From the (excellent) answers here, I see that maybe we shouldn't estimate tasks at all, only the user stories. The reason we estimate tasks (and not stories) is that we've been getting story estimates terribly wrong -- but I guess that's the subject for an altogether different question.

    Why are developers complaining? Meetings are long. Meetings are monotonous: story after story, task after task, struggling (yes, struggling) to estimate how long each will take and what it involves. Estimating tasks makes user-story estimation seem pointless. The longer the meeting, the less focus in the room; the less focused colleagues are, the longer the meeting takes. A recursive hate-spiral develops. We've considered splitting the meeting across two days in order to keep people focused, but the developers wouldn't hear of it: one day of planning is bad enough; now we'll have two?!

    Part of our problem is that we go into very small detail (in order to get more accurate estimates), but when we estimate roughly, we go way off the mark! To sum up the question: What are we doing wrong? What additional ways are there to make the meeting generally more enjoyable?

    Read the article

< Previous Page | 196 197 198 199 200 201 202 203 204 205 206 207  | Next Page >