Search Results

Search found 3440 results on 138 pages for 'cost estimation'.

Page 23/138 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Parse JSON into a ListView friendly output

    - by Thomas McDonald
    So I have this JSON, which my activity then retrieves into a string:

        {"popular": {
          "authors_last_month": [
            { "url":"http://activeden.net/user/OXYLUS", "item":"OXYLUS", "sales":"1148", "image":"http://s3.envato.com/files/15599.jpg" },
            { "url":"http://activeden.net/user/digitalscience", "item":"digitalscience", "sales":"681", "image":"http://s3.envato.com/files/232005.jpg" },
            { ... }
          ],
          "items_last_week": [
            { "cost":"4.00", "thumbnail":"http://s3.envato.com/files/227943.jpg", "url":"http://activeden.net/item/christmas-decoration-balls/75682", "sales":"43", "item":"Christmas Decoration Balls", "rating":"3", "id":"75682" },
            { "cost":"30.00", "thumbnail":"http://s3.envato.com/files/226221.jpg", "url":"http://activeden.net/item/xml-flip-book-as3/63869", "sales":"27", "item":"XML Flip Book / AS3", "rating":"5", "id":"63869" },
            { ... }
          ],
          "items_last_three_months": [
            { "cost":"5.00", "thumbnail":"http://s3.envato.com/files/195638.jpg", "url":"http://activeden.net/item/image-logo-shiner-effect/55085", "sales":"641", "item":"image logo shiner effect", "rating":"5", "id":"55085" },
            { "cost":"15.00", "thumbnail":"http://s3.envato.com/files/180749.png", "url":"http://activeden.net/item/banner-rotator-with-auto-delay-time/22243", "sales":"533", "item":"BANNER ROTATOR with Auto Delay Time", "rating":"5", "id":"22243" },
            { ... }
          ]
        } }

    It can be accessed here as well, although because it's quite a long string, I've trimmed the above down to show what is needed. Basically, I want to access the items from "items_last_week" and create a list of them. Originally my plan was to have the 'thumbnail' on the left with the 'item' next to it, but from playing around with the SDK today that appears too difficult or impossible to achieve, so I would be more than happy with just having the 'item' data from 'items_last_week' in the list. Coming from PHP, I'm struggling to use any of the JSON libraries available for Java: it appears deserializing (I think that's the right word) the JSON takes much more than a line of code, and they all appear to require some form of additional class, apart from the JSONArray/JSONObject code I have, which doesn't like the fact that items_last_week is nested (again, I think that's the JSON terminology) and takes an awfully long time to run on the Android emulator. So, in effect, I need a (preferably simple) way to pass the items_last_week data to a ListView. I understand I will need a custom adapter, which I can probably get my head around, but I cannot understand, no matter how much of the day I've just spent trying to figure it out, how to access certain parts of a JSON string.
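    A minimal sketch of pulling the 'item' names out of "items_last_week" with the org.json classes bundled with Android (jsonString is assumed to hold the JSON above, and listView to be a ListView from your layout):

        import java.util.ArrayList;
        import org.json.JSONArray;
        import org.json.JSONException;
        import org.json.JSONObject;
        import android.widget.ArrayAdapter;

        // inside the activity, once jsonString has been retrieved
        ArrayList<String> names = new ArrayList<String>();
        try {
            JSONObject root = new JSONObject(jsonString);
            JSONArray lastWeek = root.getJSONObject("popular")         // outer object
                                     .getJSONArray("items_last_week"); // nested array
            for (int i = 0; i < lastWeek.length(); i++) {
                JSONObject item = lastWeek.getJSONObject(i);
                names.add(item.getString("item")); // e.g. "XML Flip Book / AS3"
            }
        } catch (JSONException e) {
            e.printStackTrace();
        }
        listView.setAdapter(new ArrayAdapter<String>(
                this, android.R.layout.simple_list_item_1, names));

    Once that works, swapping the ArrayAdapter for a custom adapter is the incremental step that would put the thumbnail next to each name.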

    Read the article

  • Perl program for extracting the functions alone in a Ruby file

    - by thillaiselvan
    Hi all, I have the following Ruby program:

        puts "hai"

        def mult(a,b)
          a * b
        end

        puts "hello"

        def getCostAndMpg
          cost = 30000 # some fancy db calls go here
          mpg = 30
          return cost,mpg
        end

        AltimaCost, AltimaMpg = getCostAndMpg
        puts "AltimaCost = #{AltimaCost}, AltimaMpg = {AltimaMpg}"

    I have written a Perl script which extracts just the functions from a Ruby file:

        while (<DATA>){
            print if ( /def/ .. /end/ );
        }

    Here <DATA> is reading from the Ruby file, and the Perl program produces the following output:

        def mult(a,b)
          a * b
        end
        def getCostAndMpg
          cost = 30000 # some fancy db calls go here
          mpg = 30
          return cost,mpg
        end

    But if the function contains a nested block, say an if/elsif/else block, it does not work: the range match stops at the "end" of the "if" block instead of the "end" of the function. Example:

        def function
          if x > 2
            puts "x is greater than 2"
          elsif x <= 2 and x!=0
            puts "x is 1"
          else
            puts "I can't guess the number"
          end   #----- my code parses only up to this end

    Kindly provide solutions. Thanks in advance!
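    One way to handle nesting is to count block depth instead of relying on the /def/ .. /end/ flip-flop - a sketch that assumes block keywords and "end" start their own lines (trailing modifiers such as "return 1 if x" would still fool it):

        my $depth = 0;
        while (my $line = <DATA>) {
            if ($depth == 0) {
                next unless $line =~ /^\s*def\b/;    # wait for a function to open
                $depth = 1;
                print $line;
                next;
            }
            print $line;
            # keywords that Ruby closes with an 'end' of their own
            $depth++ if $line =~ /^\s*(def|if|unless|while|until|case|begin|class|module)\b/
                     || $line =~ /\bdo\s*(\|[^|]*\|)?\s*$/;
            $depth-- if $line =~ /^\s*end\b/;        # back at depth 0: function done
        }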

    Read the article

  • How to eliminate duplicate rows?

    - by Odette
    Hi guys, I'm back with my original query and I just have one question please (PS: I know I have to vote and register, and I promise I will do that today). With the following query (T-SQL) I am getting the correct results, except that there are now duplicates. I have been reading up and think I can use the PARTITION BY syntax - can you please show me how to incorporate PARTITION BY here?

        WITH CALC1 AS (
            SELECT OTQUOT, OTIT01 AS ITEMS, ROUND(OQCQ01 * OVRC01,2) AS COST
            FROM @[email protected]
            WHERE OTIT01 <> ''
            UNION ALL
            ...
            SELECT OTQUOT, OTIT10 AS ITEMS, ROUND(OQCQ10 * OVRC10,2) AS COST
            FROM @[email protected]
            WHERE OTIT10 <> ''
        )
        SELECT OTQUOT, DESC, ITEMS, RN
        FROM (
            SELECT OTQUOT, ITEMS, B.IXRPGP AS GROUP, C.OTRDSC AS DESC, COST,
                   ROW_NUMBER() OVER (PARTITION BY OTQUOT ORDER BY COST DESC) AS RN
            FROM CALC1 AS A
            INNER JOIN @[email protected] AS B ON (A.ITEMS = B.IKITMC)
            INNER JOIN DATAGRP.GDSGRP AS C ON (B.IXRPGP = C.OKRPGP)
        ) T

    RESULTS:

        60408169 FENCING GNCPDCTP18BGBG 1
        60408169 FENCING CGIFESHPD1795BG 2
        60408169 FENCING GTTCGIBG 3
        60408169 FENCING GBTCGIBG 4

    How do I get rid of the duplicates? Thanks Bill and all the others for your help (I am still learning!)
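    If the duplicates are rows repeating the same quote and item, one common pattern is to partition by both columns and keep only the first row of each group - a sketch built on your inner query (adjust the ORDER BY to control which duplicate survives):

        SELECT OTQUOT, DESC, ITEMS, RN
        FROM (
            SELECT OTQUOT, ITEMS, B.IXRPGP AS GROUP, C.OTRDSC AS DESC, COST,
                   ROW_NUMBER() OVER (PARTITION BY OTQUOT, ITEMS
                                      ORDER BY COST DESC) AS RN
            FROM CALC1 AS A
            INNER JOIN @[email protected] AS B ON (A.ITEMS = B.IKITMC)
            INNER JOIN DATAGRP.GDSGRP AS C ON (B.IXRPGP = C.OKRPGP)
        ) T
        WHERE RN = 1   -- keeps one row per (quote, item)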

    Read the article

  • Need a workaround to filter on related model and aggregated fields in Django

    - by parxier
    I opened a ticket for this problem. In a nutshell, here is my model:

        class Plan(models.Model):
            cap = models.IntegerField()

        class Phone(models.Model):
            plan = models.ForeignKey(Plan, related_name='phones')

        class Call(models.Model):
            phone = models.ForeignKey(Phone, related_name='calls')
            cost = models.IntegerField()

    I want to run a query like this one:

        Phone.objects.annotate(total_cost=Sum('calls__cost')).filter(total_cost__gte=0.5*F('plan__cap'))

    Unfortunately Django generates bad SQL:

        SELECT "app_phone"."id", "app_phone"."plan_id", SUM("app_call"."cost") AS "total_cost"
        FROM "app_phone"
        INNER JOIN "app_plan" ON ("app_phone"."plan_id" = "app_plan"."id")
        LEFT OUTER JOIN "app_call" ON ("app_phone"."id" = "app_call"."phone_id")
        GROUP BY "app_phone"."id", "app_phone"."plan_id"
        HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"."cap"

    and errors with:

        ProgrammingError: column "app_plan.cap" must appear in the GROUP BY clause or be used in an aggregate function
        LINE 1: ...."plan_id" HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"....

    Is there any workaround apart from running raw SQL?
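    Until that ticket is fixed, one workaround short of the cursor API is Phone.objects.raw() (Django 1.2+), pushing the aggregate into a subquery so the HAVING clause never needs to reference app_plan.cap - a sketch, assuming the default table names generated by the models above:

        phones = Phone.objects.raw("""
            SELECT p.id
            FROM app_phone p
            JOIN app_plan pl ON pl.id = p.plan_id
            JOIN (SELECT phone_id, SUM(cost) AS total_cost
                    FROM app_call
                   GROUP BY phone_id) c ON c.phone_id = p.id
            WHERE c.total_cost >= 0.5 * pl.cap
        """)
        over_half_cap = [phone.id for phone in phones]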

    Read the article

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    A SQL index lets me quickly find the strings that match my query. Now I have to search a big table for the strings which do not match. Of course, the normal index does not help and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                          QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                      QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance if almost all the tuples match). But here, "not equal" searches really are slower. Is there any way to make these "is not equal to" searches faster? Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                                  QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                              QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
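    Two things that often help (sketches; table, column and function names are taken from the question, and the partial index assumes tld() is IMMUTABLE, which it must already be for tld_idx to exist):

        -- a btree cannot answer <>, but it can answer < and > separately
        SELECT person FROM PhonesPersons WHERE phone < '+33 1234567'
        UNION ALL
        SELECT person FROM PhonesPersons WHERE phone > '+33 1234567';

        -- for the email case, a partial index stores only the 'not fr' rows,
        -- so the planner can index-scan the != query directly
        CREATE INDEX emailspersons_not_fr_idx
            ON EmailsPersons (person)
            WHERE tld(email) <> 'fr';

    Whether the planner actually uses these depends on the statistics; for the phone example, where almost every row matches !=, a sequential scan may legitimately remain the cheapest plan.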

    Read the article

  • Graph search problem with route restrictions

    - by Darcara
    I want to calculate the most profitable route, and I think this is a type of travelling salesman problem. I have a set of nodes that I can visit, a function to calculate the cost of travelling between nodes, and a function for the points gained by reaching a node. The goal is to reach a fixed, known score while minimizing the cost. These costs and rewards are not fixed and depend on the nodes visited before. The starting node is fixed. There are some restrictions on how nodes can be visited. Some simplified examples:

    - Node B can only be visited after A.
    - After node C has been visited, D or E can be visited. Visiting at least one is required; visiting both is permissible.
    - Z can only be visited after at least 5 other nodes have been visited.
    - Once 50 nodes have been visited, the nodes A-M no longer reward points.
    - Certain nodes can (and probably must) be visited multiple times.

    Currently I can think of only two ways to solve this: a) genetic algorithms, with the fitness function calculating the cost/benefit of the generated route; b) Dijkstra search through the graph, since the starting node is fixed, although the large number of nodes will probably make that infeasible memory-wise. Are there any other ways to determine the best route through the graph? It doesn't need to be perfect; an approximated path is perfectly fine, as long as its error is acceptable. Would TSP solvers be an option here?

    Read the article

  • Selecting value from array results

    - by Swodahs
    Being new to learning PHP, I am having trouble understanding how to select/echo/extract a value from the array result an API script returns. Using the standard:

        echo "<pre>";
        print_r($ups_rates->rates);
        echo "</pre>";

    the results returned look like this:

        Array
        (
            [0] => Array
                (
                    [code] => 03
                    [cost] => 19.58
                    [desc] => UPS Ground
                )
            [1] => Array
                (
                    [code] => 12
                    [cost] => 41.69
                    [desc] => UPS 3 Day Select
                )
            [2] => Array
                (
                    [code] => 02
                    [cost] => 59.90
                    [desc] => UPS 2nd Day Air
                )
        )

    If I only need to work with the values of the first result - code 03, 19.58, UPS Ground - what is the correct way to echo one or more of those values? I thought:

        $test = $ups_rates[0][cost];
        echo $test;

    This is obviously wrong, and my understanding of the array results isn't improving. Can someone please show me how I would echo an individual value of the returned array and/or assign it to a variable to echo the normal way?
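    A sketch based on the print_r output above - the array lives on the object's rates property (not on $ups_rates itself), and the keys must be quoted strings; an unquoted bareword like [cost] is treated as an undefined constant:

        <?php
        $rates = $ups_rates->rates;   // the array shown by print_r

        $first = $rates[0];           // first shipping option
        echo $first['cost'];          // 19.58 - note the quotes around 'cost'
        echo $first['desc'];          // UPS Ground

        $cost = $rates[0]['cost'];    // or grab it in one step
        ?>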

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. For an introduction to SQL Azure, check out the post by Pinal here. In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will write custom code and host it in the Azure worker role; the aim is to add features that are not currently available in SQL Azure, or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. A worker role can run background processes, which makes it the ideal tool for tasks such as synchronization and backup. First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:

    - Scaling out access over multiple databases enables us to handle workload efficiently.
    - As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data.

    And some scenarios in which SQL Server to SQL Azure synchronization is beneficial:

    - Backing up a SQL Azure database onto local infrastructure.
    - Rather than investing in local infrastructure for increased workloads, such workloads can be handled by the cloud.
    - The ability to extend data to datacenters located across the world, to enable efficient data access from remote locations.

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. If you newly add a worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf). Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata that enables synchronization. This is a one-time step, and its code goes in the Setup() function, which is called once when the worker role starts. The second step is continuous (or on-demand) synchronization of the SQL Azure databases by propagating changes between them. This is done on a continuous basis by calling the Sync() function in the while loop, so the logic that synchronizes changes between the databases goes in Sync(). Discussing the coding part step by step is out of the scope of this article.
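    For a flavor of what Setup() and Sync() might contain, here is a minimal C# sketch using Sync Framework 2.1; the scope name "orders_scope", the table "Orders" and the connections are placeholders, not from the original post:

        // assumes references to Microsoft.Synchronization,
        // Microsoft.Synchronization.Data and Microsoft.Synchronization.Data.SqlServer;
        // sourceConn and targetConn are SqlConnection fields initialized elsewhere
        void Setup()
        {
            // one-time provisioning: creates tracking tables, triggers, sprocs
            var scopeDesc = new DbSyncScopeDescription("orders_scope");
            scopeDesc.Tables.Add(
                SqlSyncDescriptionBuilder.GetDescriptionForTable("Orders", sourceConn));
            var provisioning = new SqlSyncScopeProvisioning(sourceConn, scopeDesc);
            if (!provisioning.ScopeExists("orders_scope"))
                provisioning.Apply();
        }

        void Sync()
        {
            // propagate changes in both directions on every call
            var orchestrator = new SyncOrchestrator
            {
                LocalProvider  = new SqlSyncProvider("orders_scope", sourceConn),
                RemoteProvider = new SqlSyncProvider("orders_scope", targetConn),
                Direction      = SyncDirectionOrder.UploadAndDownload
            };
            orchestrator.Synchronize();
        }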
    Therefore, let me suggest a resource, which is given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here). Further, you will need to reference some libraries before you start coding; details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here. Currently, a tool named DATA SYNC, which is built on top of the Sync Framework, is available in CTP; it allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code. However, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure DATA SYNC CTP2; if you wish to have such functionality now, you have the option of developing custom code using the Sync Framework. Now, this code can easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don't you think that by writing such code, we are imitating the functionality provided by the SQL Server Agent for SQL Server? Think about it. We are scheduling our administrative task by writing custom code - in other words, we have developed a "lightweight SQL Server Agent for SQL Azure!" Since the SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! Now, if you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, so we need to store the state of the job ourselves, and the choices you have are SQL Azure and Azure tables. Note that this solution requires custom code and is not UI driven; however, for now, it can act as a temporary solution until the SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that a SQL Server Agent provides, but it does open up an interesting avenue for scheduling tasks such as backup and synchronization of SQL Azure databases by writing some custom code in the Azure worker role. Now, let us see one more possibility - running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload locally, consider the data transfer cost. If you upload to blobs residing in the same datacenter, no transfer cost applies, but the cost of blob storage does. So, before choosing an option, evaluate your preferences, keeping the cost associated with each option in mind. In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be built the same way. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles.
So at the end of the day, you need to ask – am I willing to build a custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or you can mail me at Paras[at]student-partners[dot]com Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Robotic Arm – Hardware

    - by Szymon Kobalczyk
    This is the first in a series of articles about a project I've been building in my spare time since last summer. It all began when I was researching the topic of modeling human motion kinematics in order to create a gesture recognition library for Kinect. This ties heavily into the motion theory of robotic manipulators, so I also glanced at some designs of robotic arms. Somehow I stumbled upon this cool-looking open source robotic arm: It was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since I had for some time been hooked on toying with microcontrollers, robots and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm, and in later posts I will write about the software to control it. Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive - products from Lynxmotion and Dagu look great, but both cost around USD $300 (there is actually one cheap arm available, but it looks more like a toy to me). In comparison, this design is quite cheap. It uses seven hobby-grade servos, and even the cheapest ones should work fine. The structure is built from a set of laser-cut parts connected with a few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that, you only need a microcontroller board to drive the servos. So in total it comes out a lot cheaper to build it yourself than to buy an off-the-shelf robotic arm. Oh, and if you don't like this one, there are a few more robotic arm projects on Thingiverse (including one by oomlout).

    Laser cut parts

    Some time ago I built another robot using laser-cut parts, so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. The design is actually split into a second project for the mini servo gripper (there is also a standard-servo version available, but it won't fit this arm). I wanted to make some small adjustments to the layout and add measurements to the parts before sending them for cutting. I looked at some free 2D CAD programs, and finally did all this work using QCad 3 Beta, which worked great for me (I also tried LibreCAD, but it didn't work that well). All parts are cut from 4mm thick material. Because I was worried that acrylic would be too fragile and might break, I also ordered another set cut from plywood. In the end I built it from plywood because it was easier to glue (I was told acrylic requires a special glue). Btw, I found a great laser cutting service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood).

    Metal parts

    I bought all the M3 screws and nuts at a local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper, because otherwise it unscrews and comes apart quickly. I couldn't find a local store with metal spacers and had to order them online (you need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all the metal parts.

    Servos

    This arm uses five standard-size servos to drive the arm itself, and two micro servos for the gripper. The author of the project used Modelcraft RS-2 and Modelcraft ES-05 HT servos. I had two Futaba S3001 servos lying around, and ordered additional TowerPro SG-5010 standard-size servos and TowerPro SG90 micro servos. However, it turned out that the SG90 won't fit in the gripper, so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo.
    Later it also turned out that the Futaba servos make a strange noise while working, so I swapped one for a TowerPro SG-5010, which has higher torque (8 kg/cm). I also bought three servo extension cables. All the servos cost me USD $45.

    Assembly

    The build process is not difficult, but you need to think carefully about the order of assembly. You can do the base and upper arm first. Because the two servos in the base are close together, you need to put in the first one, with one piece of the lower arm already connected, before you put in the second servo. Then you connect the upper arm and finally add the second piece of the lower arm to hold it together. The gripper and base require some gluing, so think that through too. Make sure to look closely at all the photos on Thingiverse (including other people's copies) and read the additional posts on jjshortcut's blog: "My mini servo grippers and completed robotic arm" and "Multiply the robotic arm and electronics". Here is also Rob's copy, cut from aluminum. My assembled arm looks like this - I think it turned out really nice.

    Servo controller board

    The last piece of hardware I needed was an electronic board that would take commands from the PC and drive all seven servos. I could probably have used an Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics). However, one problem is that most support only up to six servos, and another is that their accuracy is limited by the Arduino's timer frequency. So instead I looked for a dedicated servo controller and found the Maestro series of boards from Pololu. I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features, including a native USB connection, high-resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either a general input or output. So far I'm using seven channels, so I still have five available to connect some sensors (for example, a distance sensor mounted on the gripper might be useful). And the last but important factor was that they have an SDK in .NET - what more could I wish for! The board itself is very small - half the size of a Tic-Tac box. I picked one up for about USD $35 in this store. Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor, but it is significantly more expensive at USD $87.30. The Maestro Controller Driver and Software package includes the Maestro Control Center program, which lets you configure the board immediately. For each servo I first figured out its range of movement and set the min/max limits. I played with the speed and acceleration settings as well. A big issue for me was that two servos control the position of the lower arm (the shoulder joint), and both have to move at the same time. This is where the scripting feature of the Pololu board turned out very helpful. I wrote a script that synchronizes the position of the second servo with the first one - so now I only need to move one servo and the other follows automatically. This turned out to be tricky because I couldn't find a simple offset mapping between the move ranges of the two servos - I had to divide the range into several sub-ranges and map each individually. The scripting language is a bit assembler-like, but it gets the job done. And there is even runtime debugging and a stack view available. Altogether I'm very happy with the Pololu Mini Maestro servo controller, and with this final piece I completed the build and was able to move my arm from the Maestro Control Center program.
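    For illustration, here is a minimal sketch of what such a follower script might look like in the Maestro's stack-based scripting language (the channel numbers are hypothetical, and as noted above a real arm needs the value remapped per sub-range rather than copied verbatim):

        # copy channel 0's target to channel 1, forever
        begin
          0 get_position   # push servo 0's current target (quarter-microseconds)
          1 servo          # pop it and apply it as channel 1's target
        repeat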
    The total cost of my robotic arm was:

        $10   laser cut parts
        $10   metal parts
        $45   servos
        $35   servo controller
        -----------------------
        $100  total

    So here you have all the information about the hardware. In the next post I'll start talking about the software, which I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!

    Read the article

  • Why people don't patch and upgrade?!?

    - by Mike Dietrich
    Discussing the topic "Why Upgrade" or "Why not Upgrade" is not always fun. Actually, the arguments repeat from customer to customer. Typically we hear things such as:

    - A PSU or Patch Set introduces new bugs
    - A new PSU or Patch Set introduces new features which lead to risk and require application verification
    - Patching means risk
    - Patching changes the execution plans
    - Patching requires too much testing
    - Patching is too much work for our DBAs
    - Patching costs a lot of money and doesn't pay out

    And to be very honest, sometimes it's hard for me to stay calm in such discussions. Let's discuss some of these points in a bit more detail.

    A PSU or Patch Set introduces new bugs. Well, yes, that is true, as no software containing more than a few lines of code is bug free. This applies to Oracle's code as well as to any application or operating system code. But first of all, does that mean you never patch your OS because the patch may introduce new flaws? And second, what is the point of saying "it introduces new bugs"? Does that mean you will never get rid of the mean issues we know about and have already fixed? Scroll down from MOS Note:161818.1 to the patch release you are on, no matter if it's 10.2.0.4 or 11.2.0.3, and check the Known Issues And Alerts. Will you take responsibility for knowing about all these issues and still refuse to upgrade to 11.2.0.4? I won't.

    A new PSU or Patch Set introduces new features. Ok, we can discuss that. Offering new functionality within a database patch set is a dubious thing. It has advantages, such as in 11.2.0.4, to which we backported Database Redaction. But that is something you will only use once you have an Advanced Security license. I interpret that statement, which I've heard quite often from customers, in a different way: people don't want to get surprises such as new behaviour. This certainly gives everybody a hard time. And we've had many examples in the past (SESSION_CACHED_CURSORS in 10.2.0.4, _DATAFILE_WRITE_ERRORS_CRASH_INSTANCE in 11.2.0.2 and others) where those things weren't documented, not even in the README. Thanks to many friends out there, I learned about those as well. So new behaviour is what people consider risky - not really new features. And just to point this out: a PSU never brings in new features or new behaviour, by definition!

    Patching means risk. Does it really? Yes, there were issues in the past (and sometimes in the present as well) where a patch didn't get installed correctly. But personally I consider it way more risky not to patch. Keep this in mind: the day Oracle publishes a PSU (or CPU) containing security fixes, all the great security experts out there go public with their findings as well. So from that day on, even my grandma can find out about those issues and try to attack somebody. Now a lot of people say: "My database does not face the internet." And I will answer: "The enemy is already sitting behind your firewalls. And potentially knows about these things." My statement: not patching introduces way more risk to your environment than patching. Seriously!

    Patching changes the execution plans. Does it really? I agree there's a very small risk of this happening with Patch Sets. But not with PSUs or CPUs, as they contain no optimizer fixes that change behaviour (though they may contain fixes curing wrong-query-result bugs). But what's the point of a changed execution plan? In Oracle Database 11g it is so simple to be prepared. SQL Plan Management is a free EE feature - so once that occurs, you'll put the plan into the Plan Baseline. Basta!
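    Capturing an accepted plan is only a few lines of PL/SQL - a sketch (the SQL_ID and plan hash value here are placeholders for your own statement's):

        DECLARE
          n PLS_INTEGER;
        BEGIN
          -- pin the currently good plan from the cursor cache into a baseline
          n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                 sql_id          => 'abcd1234efgh5',   -- placeholder SQL_ID
                 plan_hash_value => 1234567890);       -- placeholder plan hash
          DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
        END;
        /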
    Yes, you wouldn't like to get such surprises? Then please use the SQL Performance Analyzer (SPA) from Real Application Testing, and you'll detect such changes easily upfront, in minutes. And not to forget: a plan change can also be very positive! Yes, there's a little risk with a database patch set - and we have many possibilities to detect it before patching.

    Patching requires too much testing. Well, does it really? I have seen over the past 12 years how people test. There are very different efforts and approaches. I have seen people spend a hell of a lot of money on licenses or on project team staffing. And I have seen people sail blindly without any tests, going the John Wayne approach. Proper tools will allow you to test easily without too much effort. See the paragraph above. We have used Real Application Testing in so many customer projects, reducing the amount of work spent on testing by over 50%. But apart from that, at some point you will have to stop testing. If you don't, you'll get lost and you'll burn money. There's no 100% guarantee. You will have to deal with a little risk, as reaching the final 5% of certainty will cost you the same as it cost to reach 95%. And doing this will lead to abnormally long product cycles that you'll run behind forever. And that will cost even more money.

    Patching is too much work for our DBAs. Patching is a lot of work, I agree. And it's no fun work. It's boring, annoying. You don't learn much from it. That's why you should try to automate this task. Use the Database Lifecycle Management Pack. And don't cry about the fact that it costs money. Yes, it does. But it will ease the process, and you'll save a lot of cost, as you won't waste your valuable time on patching. Or use Oracle Database 12c Oracle Multitenant and patch either by unplug/plug, or patch an entire container database with all its PDBs in one task. We have customer reference cases proving it saved them 75% of time, effort and cost since they started using the Lifecycle Management Pack. So why don't you use it?

    Patching costs a lot of money and doesn't pay out. Well, see my statements in the paragraph above. And it does pay out, as flying with a database with 100 known critical flaws in it which are already fixed by Oracle (such as in the Oct 2013 PSU for Oracle Database 12c) will cost way more in case of failure or even data loss. Bet with me?

    Let me finally ask you some questions. What cell phone are you using, and which OS does it run? Do you have an iPhone 5, and did you already upgrade to iOS 7.0.3? I've just encountered on mine that the alarm (which I rely on when traveling) now depends on the physical sound on/off switch: if it is switched to "off" physically, the alarm rings "silently". What a wonderful example of a behaviour change coming in with a patch set. Will this push you to stay with iOS 5 or iOS 6? No, because those have security flaws which won't be fixed anymore. What browser are you surfing with? Do you use Mozilla 3.6? Well, congratulations to all the hackers. It will be easy for them to attack you and harm your system. I'd guess you have the auto-updater on. Same for Google Chrome, Safari, IE. Right? -Mike

    Read the article

  • Does a SELECT in Oracle read UNDO?

    - by Liu Maclean
    Can Oracle do a dirty read? Oracle has never supported dirty reads; as Tom Kyte has explained many times on AskTom, Oracle builds consistent reads from the before images stored in undo, so a query never sees uncommitted data. But no RDBMS is without its back doors, and in Oracle two hidden parameters open this one:

        _offline_rollback_segments
        _corrupted_rollback_segments

    These two parameters exist to help force a database open after ORA-600 [4XXX] errors or undo corruption, and they are meant to be set only under the guidance of Oracle Support; used carelessly, they can leave the database logically corrupt. Do not set them on a real system just because you read about them here!

    For the undo segments named in either list:

    - the segments are not brought online at startup, and are marked OFFLINE in UNDO$;
    - the instance will not assign new transactions to them;
    - active transactions in them are not rolled back, but are treated as dead transactions to be handled by SMON (see "Recover Dead transaction").

    _OFFLINE_ROLLBACK_SEGMENTS (the offline undo segment list) works like this:

    - at startup/open, the listed undo segments are still read, and their transaction tables are consulted to determine the status of transactions;
    - a transaction whose status cannot be resolved is treated as committed for consistent reads;
    - if a listed undo segment is corrupted or missing, messages are written to alert.log and the open continues;
    - DML that touches blocks with open ITLs pointing into these segments can spend a long time, and a lot of CPU, trying to resolve them.

    _CORRUPTED_ROLLBACK_SEGMENTS (the corrupted undo segment list) is more drastic:

    - at startup/open, the listed undo segments are not read at all;
    - every transaction in them is treated as committed, and the segments can be dropped;
    - this easily produces logically inconsistent data, since changes that were never committed become visible;
    - if bootstrap objects are involved, the database may fail to open with ORA-00704: bootstrap process failure (see "ORA-00600 [4000] ORA-00704 bootstrap process failure");
    - Oracle recommends checking the affected transactions afterwards with tools such as TXChecker.

    With those two parameters in mind, here is a demonstration that uses _CORRUPTED_ROLLBACK_SEGMENTS to make a SELECT skip undo:

        SQL> alter system set event= '10513 trace name context forever, level 2' scope=spfile;
        System altered.

        SQL> alter system set "_in_memory_undo"=false scope=spfile;
        System altered.

    Event 10513 at level 2 stops SMON from rolling back dead transactions, and _in_memory_undo=false disables in-memory undo.

        SQL> startup force;
        ORACLE instance started.
        Total System Global Area 3140026368 bytes
        Fixed Size                  2232472 bytes
        Variable Size            1795166056 bytes
        Database Buffers         1325400064 bytes
        Redo Buffers               17227776 bytes
        Database mounted.
        Database opened.

    Session A:

        SQL> conn maclean/maclean
        Connected.

        SQL> create table maclean tablespace users as select 1 t1 from dual connect by level<=501;
        Table created.

        SQL> exec dbms_stats.gather_table_stats('','MACLEAN');
        PL/SQL procedure successfully completed.
        SQL> set autotrace on;
        SQL> select sum(t1) from maclean;

           SUM(T1)
        ----------
               501

        Execution Plan
        ----------------------------------------------------------
        Plan hash value: 1679547536
        ------------------------------------------------------------------------------
        | Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
        ------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT   |         |     1 |     3 |     3   (0)| 00:00:01 |
        |   1 |  SORT AGGREGATE    |         |     1 |     3 |            |          |
        |   2 |   TABLE ACCESS FULL| MACLEAN |   501 |  1503 |     3   (0)| 00:00:01 |
        ------------------------------------------------------------------------------

        Statistics
        ----------------------------------------------------------
                  1  recursive calls
                  0  db block gets
                  3  consistent gets
                  0  physical reads
                  0  redo size
                515  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  1  rows processed

    Nothing has modified the data yet, so the query reads current blocks and needs only 3 consistent gets.

        SQL> update maclean set t1=0;
        501 rows updated.

        SQL> alter system checkpoint;
        System altered.

    Note that session A does not commit. Another session now runs the same query:

        SQL> conn maclean/maclean
        Connected.
        SQL> set autotrace on;
        SQL> select sum(t1) from maclean;

           SUM(T1)
        ----------
               501

        Execution Plan: plan hash value 1679547536 (same plan as above)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
                505  consistent gets
                  0  physical reads
                108  redo size
                515  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  1  rows processed

    Because the update is uncommitted, this session must apply undo to build CR copies of the blocks, and consistent gets climbs to 505. Now kill session A's dedicated server process:

        [oracle@vrh8 ~]$ ps -ef|grep LOCAL=YES |grep -v grep
        oracle 5841 5839 0 09:17 ? 00:00:00 oracleG10R25 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
        [oracle@vrh8 ~]$ kill -9 5841

    Killing the server process leaves a dead transaction behind, which SMON would normally roll back; event 10513 prevents that:

        select ktuxeusn,
               to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') "Time",
               ktuxesiz,
               ktuxesta
        from   x$ktuxe
        where  ktuxecfl = 'DEAD';

          KTUXEUSN Time                   KTUXESIZ KTUXESTA
        ---------- -------------------- ---------- ----------------
                 2 06-AUG-2012 09:20:45          7 ACTIVE

    Undo segment number 2 holds an active dead transaction. Reconnect and query again:

        SQL> conn maclean/maclean
        Connected.
        SQL> set autotrace on;
        SQL> select sum(t1) from maclean;

           SUM(T1)
        ----------
               501

        Execution Plan: plan hash value 1679547536 (same plan as above)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
                411  consistent gets
                  0  physical reads
                108  redo size
                515  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  1  rows processed

    Even after the kill, the dead transaction has not been rolled back (SMON is held off), so the query still applies undo and still returns 501. Now find the rollback segment that owns the dead transaction and put it on the corrupted list:

        SQL> select segment_name from dba_rollback_segs where segment_id=2;

        SEGMENT_NAME
        ------------------------------
        _SYSSMU2$

        SQL> alter system set "_corrupted_rollback_segments"='_SYSSMU2$' scope=spfile;
        System altered.

    With undo segment 2 named in _corrupted_rollback_segments, its undo will no longer be read:

        SQL> startup force;
        ORACLE instance started.
        Total System Global Area 3140026368 bytes
        Fixed Size                  2232472 bytes
        Variable Size            1795166056 bytes
        Database Buffers         1325400064 bytes
        Redo Buffers               17227776 bytes
        Database mounted.
        Database opened.

        SQL> conn maclean/maclean
        Connected.
        SQL> set autotrace on;
        SQL> select sum(t1) from maclean;

           SUM(T1)
        ----------
                94

        Execution Plan: plan hash value 1679547536 (same plan as above)

        Statistics
        ----------------------------------------------------------
                228  recursive calls
                  0  db block gets
                 29  consistent gets
                  5  physical reads
                116  redo size
                514  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  4  sorts (memory)
                  0  sorts (disk)
                  1  rows processed

        SQL> /

           SUM(T1)
        ----------
                94

        Execution Plan: plan hash value 1679547536 (same plan as above)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
                  3  consistent gets
                  0  physical reads
                  0  redo size
                514  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  1  rows processed
    consistent gets is back down to 3: the query no longer builds CR copies of the blocks at all. Because the ITLs on those blocks point into an undo segment named in _corrupted_rollback_segments, the transaction that was never committed and never rolled back is simply treated as committed, and its undo is never applied. The query now returns 94 instead of 501 - uncommitted, logically inconsistent data; in other words, a dirty read. This is exactly why these parameters are so dangerous: outside a last-resort forced-open recovery performed under Oracle Support's supervision, never set them on a production system. Handle with extreme care.

    Read the article

  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
    The Question

    What hosted mercurial repository/bug tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or the bug tracking features, that would make it difficult to recommend it? Do you have any other experiences with it, or opinions of it, that you would like to share? If you have used other, non-mercurial hosted repository/bug tracking systems, how do they compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experience of several.)

    Background

    I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve mercurial repositories over ssl, when I am at a customer site I have to connect my laptop via VPN to my work network and access the mercurial repositories over a samba share (even if it is just to sync twice a day). This is excruciatingly slow on high-latency networks and can be impossible with some customers' firewalls. Even if we could run a TRAC or Redmine server here (thanks turnkey), I'm not sure it would be much quicker, as our internet connection is over-stretched as it is. What I would like is for developers to be able to push/pull to/from a remote repository, for servicing engineers to be able to pull from a remote repository, and for customers (both internal and external) to be able to submit bug/issue reports.

    Initial options

    The two options I found were Assembla and Jira. Looking at Assembla, I thought the 'group' price looked reasonable, but after enquiring, I found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed separately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) as you like without using up your user allocation. Jira only charges based on the number of users; unfortunately, client users also count towards this if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me.
SSL based push/pull? Website https login. BitBucket, http://bitbucket.org/plans/, is primarily a mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has it’s own issues tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL based push/pull. No https on website login, but supports OpenID, so you can chose an OpenID provider with https login. Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10GB. SSL based push/pull? Website https login? Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can only have one repository per project or not. Cost: $9, $19, & £39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL based push/pull? Website https login. Jira, http://www.atlassian.com/software/jira/, isn’t limited by the number of repositories you can have, but by ‘user’. It could work out quite expensive if we want client users to be able to track their issues, since they would need a full user account to be created for them. Also, while there is a Mercurial extension to support jira, there is no ‘Advanced integration’ for Mercurial from Atlassian Fisheye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL based push/pull? Website https login. Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kilns mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the Fogbugz integration is supposedly excellent. *8’) Cost: £30/developer/month ($5/d/m more than either on their own). SSL based push/pull? SourceRepo, http://sourcerepo.com/, also supports HG and is even cheaper than BitBucket & Codebase. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/trac/redmine instances and 500MB, 1GB & 3GB storage. SSL based push/pull. Website https login. Edit: 29th March 2010 & Bounty I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than find the absolute best one, 'bad reviews' could be considered more important than good ones.

    Read the article

  • Validating textboxes and checkboxes, then adding the values of those checkboxes

    - by TiTi Nguyen
    I am very new to JavaScript. I am running into a problem and don't know how to solve it. Could you please help? Basically, I want to create some textboxes and checkboxes in a form, validate those fields, and add up the values of the checkboxes that are selected. One of the textboxes asks how many semesters were attended, and there are 3 checkboxes with values of 100, 1000, and 750. Whichever checkboxes are selected should have their values multiplied by the number of semesters attended. For example, if the first two checkboxes are selected, then totalCost = (100+1000)*semester. Here is my code:

        User Name: <label>User Address: <input type = "text" id ="address" size = "30"/></label> <br/><br/>
        <label> User E-mail address: <input type = "text" id ="email" size = "30"/></label> <br/><br/>
        <label> User Phone number: <input type = "text" id ="phone" size = "30"/></label> <br/><br/>
        <label> User area code: <input type = "text" id ="area" size = "30"/></label> <br/><br/>
        <label> User SSN: <input type = "text" id ="ssn" size = "30"/></label> <br/><br/>
        <label> User Birthday: <input type = "text" id ="birthday" size = "30"/></label> <br/><br/>
        <label> Number of semester attended: <input type = "text" id ="semester" size = "3"/></label> <br/><br/>
        <label><input type="checkbox" id="box_book" value="100"/>Books $100 per semester</label> <br/>
        <label><input type="checkbox" id="box_tuition" value="1000"/>Tuition $1000 per semester</label> <br/>
        <label><input type="checkbox" id="box_room" value="750"/>Room and Board $750 per semester</label> <br/>
        <input type="reset" id="reset"/>
        <input type="submit" id="submit" onclick="checking()"/>
        <p/>
        </form>

        function checking() {
            var name = document.forms["myForm"]["name"].value;
            var address = document.forms["myForm"]["address"].value;
            var email = document.forms["myForm"]["email"].value;
            var atpos = email.indexOf("@");
            var dotpos = email.lastIndexOf(".");
            var phone = document.forms["myForm"]["phone"].value;
            var area = document.forms["myForm"]["area"].value;
            var ssn = document.forms["myForm"]["ssn"].value;
            var birth = document.forms["myForm"]["birthday"].value;
            var semester = document.forms["myForm"]["semester"].value;
            var boxBook = document.forms["myForm"]["box_book"].value;
            var boxTuition = document.forms["myForm"]["box_tuition"].value;
            var boxRoom = document.forms["myForm"]["box_room"].value;
            if (name==null || name=="") { alert("Please fill in your name."); return false; }
            if (address==null || address=="") { alert("Please fill in your address."); return false; }
            if (atpos<1 || dotpos<atpos+2 || dotpos+2>=email.length) { alert("The email (" + email + ") is not a valid e-mail address. Please reenter your email address."); return false; }
            if (phone.length!=10) { alert("Phone number entered in incorrect form. Please reenter phone number in the correct form which contains 10 numbers."); return false; }
            if (area==null || area=="") { alert("Please fill in the area code"); return false; }
            if (ssn.length!=9) { alert("SSN entered in incorrect form. Please reenter SSN."); return false; }
            if (birth==null || birth=="") { alert("Please fill in your date of birth."); return false; }
            if (semester==null || semester=="") { alert("How many semester have you attended?"); return false; }
            if (document.getElementById("box_book").checked == false && document.getElementById("box_tuition").checked == false && document.getElementById("box_room").checked == false) { alert("You must select one of the checkboxes"); return false; }
            if (document.getElementById("box_book").checked == true) { var subcost = boxBook; var totalcost = subcost * semester; alert("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_book").checked == true && document.getElementById("box_tuition").checked == true) { var subcost = boxBook + boxTuition; var totalcost = subcost * semester; alert("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_book").checked == true && document.getElementById("box_tuition").checked == true && document.getElementById("box_room").checked == true) { var subcost = boxBook + boxTuition + boxRoom; var totalcost = subcost * semester; alert("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_tuition").checked == true) { var subcost = boxTuition; var totalcost = subcost * semester; alert("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_tuition").checked == true && document.getElementById("box_room").checked == true) { var subcost = boxTuition + boxRoom; var totalcost = subcost * semester; alert("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_room").checked == true) { var subcost = boxRoom; var totalcost = subcost * semester; alert("Your total cost is: $" + totalcost); }
            if (document.getElementById("box_book").checked == true && document.getElementById("box_room").checked == true) { var subcost = boxBook + boxRoom; var totalcost = subcost * semester; alert("Your total cost is: $" + totalcost); }
            else return false;
        }

    When I hit the submit button, nothing happens!! Please help.
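    Two things in the code above are worth checking - here is a sketch of the cost part, rewritten around them (the ids are taken from your markup):

        // 1) The inputs have id attributes but no name attributes, so
        //    document.forms["myForm"]["semester"] is undefined and reading .value
        //    throws - one likely reason "nothing happens" on submit.
        // 2) .value is always a string: "100" + "1000" is "1001000", not 1100.
        function totalCost() {
            var semester = parseInt(document.getElementById("semester").value, 10);
            var boxes = ["box_book", "box_tuition", "box_room"];
            var subcost = 0;
            for (var i = 0; i < boxes.length; i++) {
                var box = document.getElementById(boxes[i]);
                if (box.checked) { subcost += parseFloat(box.value); }
            }
            if (subcost === 0) { alert("You must select one of the checkboxes"); return false; }
            alert("Your total cost is: $" + (subcost * semester));
            return true;
        }

    Summing the checked boxes first also collapses the seven overlapping if-blocks (several of which can fire at once, showing multiple alerts) into one calculation.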

    Read the article

  • Spanning-tree setup with incompatible switches

    - by wfaulk
    I have a set of eight HP ProCurve 2910al-48G Ethernet switches at my datacenter that are set up in a star topology with no physical loops. I want to partially mesh the switches for redundancy and manage the loops with a spanning-tree protocol. However, our connection to the datacenter is provided by two uplinks, each to a Cisco 3750. The datacenter's switches handle the redundant connection using PVST spanning-tree, which is a Cisco-proprietary spanning-tree implementation that my HP switches do not support. It appears that my switches are not participating in the datacenter's spanning-tree domain, but are blindly passing the BPDUs between the two switchports on my side, which enables the datacenter's switches to recognize the loop and put one of the uplinks into the Blocking state. This is somewhat supposition, but I can confirm that, while my switches say that both of the uplink ports are forwarding, only one is passing any real quantity of data. (I am assuming that I cannot get the datacenter to move away from PVST. I don't know that I'd want them to make that significant a change anyway.) The datacenter has also sent me this output from their switches (which I have expurgated of any identifying info):

        3750G-1#sh spanning-tree vlan nnn

        VLAN0nnn
          Spanning tree enabled protocol ieee
          Root ID    Priority    10
                     Address     00d0.0114.xxxx
                     Cost        4
                     Port        5 (GigabitEthernet1/0/5)
                     Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

          Bridge ID  Priority    32mmm  (priority 32768 sys-id-ext nnn)
                     Address     0018.73d3.yyyy
                     Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
                     Aging Time 300 sec

        Interface           Role Sts Cost      Prio.Nbr Type
        ------------------- ---- --- --------- -------- --------------------------------
        Gi1/0/5             Root FWD 4         128.5    P2p
        Gi1/0/6             Altn BLK 4         128.6    P2p
        Gi1/0/8             Altn BLK 4         128.8    P2p

    and:

        3750G-2#sh spanning-tree vlan nnn

        VLAN0nnn
          Spanning tree enabled protocol ieee
          Root ID    Priority    10
                     Address     00d0.0114.xxxx
                     Cost        4
                     Port        6 (GigabitEthernet1/0/6)
                     Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

          Bridge ID  Priority    32mmm  (priority 32768 sys-id-ext nnn)
                     Address     000f.f71e.zzzz
                     Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
                     Aging Time 300 sec

        Interface           Role Sts Cost      Prio.Nbr Type
        ------------------- ---- --- --------- -------- --------------------------------
        Gi1/0/1             Desg FWD 4         128.1    P2p
        Gi1/0/5             Altn BLK 4         128.5    P2p
        Gi1/0/6             Root FWD 4         128.6    P2p
        Gi1/0/8             Desg FWD 4         128.8    P2p

    The uplinks to my switches are on Gi1/0/8 on both of their switches. The uplink ports are configured with a single tagged VLAN. I am also using a number of other tagged VLANs in my switch infrastructure. And, to be clear, I am passing the tagged VLAN I'm receiving from the datacenter to other ports on other switches in my infrastructure. My question is: how do I configure my switches so that I can use a spanning tree protocol inside my switch infrastructure without breaking the datacenter's spanning tree, which I cannot participate in?
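    One approach worth evaluating (a sketch only - the command syntax is assumed from ProCurve documentation, so verify it against your 2910al firmware): run MSTP on your own switches for the internal mesh, but filter IEEE BPDUs on the two uplink ports. The Cisco PVST+ BPDUs are addressed to a Cisco-proprietary multicast MAC, so the HP switches should keep flooding them between the uplinks exactly as they do today, preserving the datacenter's loop detection, while your own spanning tree never leaks toward the 3750s:

        spanning-tree                      ; enable spanning tree (MSTP) on the mesh
        spanning-tree priority 1           ; make one core switch the internal root
        spanning-tree 47-48 bpdu-filter    ; uplink ports assumed to be 47-48:
                                           ; suppress IEEE BPDUs on them

    The port numbers, and whether bpdu-filter is the right isolation mechanism for your topology, are assumptions to test in a maintenance window: a filtered port can never detect an accidental loop through the provider.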

    Read the article

  • Recommend a free/cheap CRM system [closed]

    - by Dan Hedley
    I am part of a 4-person volunteer team who manage a small housing development in London. We need a low-cost or no-cost contact management and issue tracking system. Specifically, it needs to be:

    - Web-based, or easily shared between 4 people working out of their homes
    - Easy to back up and restore
    - Decently secure

    Does anyone have any recommendations? I am reasonably technically literate, so a PHP-based solution running on a cheap hosting package would definitely be a viable option. Many thanks.

    Read the article

  • UPS vs Solar Power in case of power failure for a server [on hold]

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, plus a way to keep it running through power failures, which can last up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also be protected in case of power failure. When I say low end, I don't mean crap: the CPU needs to be x86 and score at least 1,000 in this chart: http://www.cpubenchmark.net/index.php What's the best way to do this?

    EDIT: more info. I need to run a home server that performs mostly light tasks. An x86 CPU is sadly the only route for my use. I want to be able to run the server and the router/modem during a power failure. Regarding how long the power might be out:

    1) 1 hour is OK for most situations (say 90%).
    2) 3 hours is OK (say 98%).
    3) 6 hours is more than OK (say 99.5%).
    4) In extreme cases the power might fail for days. I believe this is very unlikely; realistically it happens maybe once a year, which is too rare to care about.

    Given the above, I am looking for a cost-effective way to achieve 1-3 hours of backup power, or 6 hours if possible.

    Solutions suggested so far:

    1) Power generator: no good, as power still fails for about 10 seconds before the generator kicks in. Also, I read online that "clean" power generators cost $1.5k+, so that's out of budget, and a non-clean generator might damage electronics, right?
    2) Solar power: I don't know for sure about this. It sounds like a great idea, honestly too good to be true. For only $200 I get 100+ W? What are the drawbacks?
    3) UPS: This seems the best option. The only problem is cost: under $200 is great, $400 is the budget limit.
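    To sanity-check the UPS option against the 1-3 hour target, a back-of-the-envelope runtime estimate helps. A minimal sketch in Python, where every number is an assumption for illustration rather than the spec of any particular model:

    # Rough UPS runtime estimate. All figures are assumed; check the real
    # battery Wh rating and measure the actual load at the wall.
    battery_wh = 216            # e.g. two 12 V x 9 Ah batteries, ~216 Wh
    inverter_efficiency = 0.85  # typical conversion loss back to AC
    load_w = 40                 # low-power server plus router/modem

    runtime_hours = battery_wh * inverter_efficiency / load_w
    print(f"Estimated runtime: {runtime_hours:.1f} hours")  # ~4.6 hours

    At loads this small, runtime scales directly with battery capacity, so a mid-sized consumer UPS can plausibly cover the 1-3 hour case; the multi-day case realistically needs a generator or solar panels feeding a battery bank.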

    Read the article

  • How much money can I save from installing an 80 Plus Bronze or Gold PSU? [closed]

    - by David
    Currently I have only a 300 W PSU, and my PC works like a charm, but with all the components installed it should be pushing the PSU close to its maximum. Recently I read about the 80 Plus certification, and I'm wondering whether it's worth buying an 80 Plus certified PSU. My power cost last year was also higher than before, and my PC runs almost 14 hours a day, 7 days a week. (I mean higher cost from power actually used, not from increased prices.) I'm also planning to buy an SLI video card setup.
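    The potential savings are easy to estimate once you know the real draw and your electricity price. A minimal sketch in Python; every input is an assumption for illustration (80 Plus Bronze is roughly 85% efficient at typical loads and Gold roughly 90%, but check the actual certification curves), so plug in measured values from a wall meter:

    # Back-of-the-envelope yearly savings from a more efficient PSU.
    dc_load_w = 250       # assumed: what the components draw from the PSU
    hours_per_day = 14
    price_per_kwh = 0.25  # assumed electricity rate

    def yearly_cost(efficiency):
        wall_w = dc_load_w / efficiency  # PSU losses show up at the wall
        kwh_per_year = wall_w * hours_per_day * 365 / 1000
        return kwh_per_year * price_per_kwh

    base = yearly_cost(0.80)  # a plain ~80% efficient PSU
    print(f"Bronze saves about {base - yearly_cost(0.85):.0f} per year")
    print(f"Gold saves about {base - yearly_cost(0.90):.0f} per year")

    With assumptions in this range the savings are a few tens of currency units per year, so the certified unit pays off only if its price premium is modest; the case gets stronger once the SLI card pushes the load higher.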

    Read the article

  • Can Amazon VMs be used as Active Directory domain controllers?

    - by mrdenny
    I've got a client who wants to move his company's servers off site. As it is only a 10-person company, I'm looking for some pretty inexpensive options. One option is the smallest of the Amazon cloud machines. The question is: can I make one of these machines a domain controller? Cost-wise, the Amazon machine is cheaper than the power cost of keeping a server (or a PC) up and running in his home office 24x7, thanks to the high price of power in Southern California.

    Read the article

  • About memory cache of Linux

    - by cheneydeng
    I'm running a Python script to do some statistics. The memory it actually uses is low, about 10%, and no other process uses much memory either. However, free -m shows that almost 95% of memory is in use. My script does a lot of reading from files, so I wonder whether some Linux memory-caching mechanism is behind this. echo 1 > /proc/sys/vm/drop_caches works, but it is a manual step. How can I reduce the memory use without hurting file-reading performance?
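    For what it's worth, memory counted under buffers/cache is reclaimable on demand, so a high "used" figure from free -m is normally harmless. If the goal is still to keep the script from filling the page cache, one option is to hint the kernel after each file is read instead of dropping all caches globally. A minimal sketch, assuming Linux and Python 3.3+ (the helper name is illustrative, not from the original script):

    import os

    def read_and_release(path):
        # Read a file, then tell the kernel its cached pages may be
        # reclaimed, rather than echoing into /proc/sys/vm/drop_caches.
        with open(path, "rb") as f:
            data = f.read()  # the file's pages now sit in the page cache
            # offset=0, length=0 means "advise over the whole file"
            os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
        return data

    This only affects the pages of the files the script touches, so caching and read performance elsewhere on the system are left alone.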

    Read the article

  • Traveling Salesman in polynomial time

    - by Andres
    Problem Description: Write a program, in any language, using the fewest characters, that solves the following problem: given a collection of cities and the cost of travel between each pair of them, find the cheapest way of visiting all of the cities and returning to your starting point. All solutions must take at most polynomial time. Input will be in the form of a text file named simply "i", containing data in the following format:

    city1# city2# cost$
    cityA# cityB# cost$
    ...

    Each element in the file is separated by a space, and there is a newline at the end of every line. Code count does not include whitespace. Solutions that take longer than polynomial time will not be accepted.
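    Worth noting: no polynomial-time exact algorithm for TSP is known, and finding one would imply P = NP, so within the stated rules the realistic answer is a polynomial-time heuristic. A minimal nearest-neighbour sketch in Python, assuming the described input format and that every pair of cities appears in the file; it produces a cheap tour, not necessarily the cheapest:

    # Nearest-neighbour TSP heuristic, O(n^2): repeatedly hop to the
    # cheapest unvisited city, then return to the start.
    cost, cities = {}, set()
    for line in open("i"):
        a, b, c = line.split()
        a, b = a.rstrip("#"), b.rstrip("#")
        cost[a, b] = cost[b, a] = float(c.rstrip("$"))
        cities.update([a, b])

    start = next(iter(cities))
    tour, unvisited = [start], cities - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda city: cost[tour[-1], city])
        unvisited.remove(nxt)
        tour.append(nxt)
    tour.append(start)  # return to the starting point

    print(" -> ".join(tour))
    print("total cost:", sum(cost[a, b] for a, b in zip(tour, tour[1:])))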

    Read the article

  • MongoMapper - undefined method `keys'

    - by nimnull
    I'm trying to create a Document instance with params passed from the post-submitted form. My MongoMapper document looks like:

    class Good
      include MongoMapper::Document

      key :title, String
      key :cost, Float
      key :description, String
      timestamps!

      many :attributes

      validates_presence_of :title, :cost
    end

    And the create action:

    def create
      @good = Good.new(params[:good])
      if @good.save
        redirect_to @good
      else
        render :new
      end
    end

    params[:good] contains all valid document attributes, {"good" => {"cost" => "2.30", "title" => "Test good", "description" => "Test description"}}, but I've got a strange error from Rails:

    undefined method `keys' for ["title", "Test good"]:Array

    My gem list:

    *** LOCAL GEMS ***

    actionmailer (2.3.8)
    actionpack (2.3.8)
    activerecord (2.3.8)
    activeresource (2.3.8)
    activesupport (2.3.8)
    authlogic (2.1.4)
    bson (1.0)
    bson_ext (1.0)
    compass (0.10.1)
    default_value_for (0.1.0)
    haml (3.0.6)
    jnunemaker-validatable (1.8.4)
    mongo (1.0)
    mongo_ext (0.19.3)
    mongo_mapper (0.7.6)
    plucky (0.1.1)
    rack (1.1.0)
    rails (2.3.8)
    rake (0.8.7)
    rubygems-update (1.3.7)

    Any suggestions how to fix this error?

    Read the article

  • Can I get an example please?

    - by Doug
    $starcraft = array(
        "drone" => array(
            "cost" => "6_0-",
            "gas" => "192",
            "minerals" => "33",
            "attack" => "123",
        ),
        "zealot" => array(
            "cost" => "5_0-",
            "gas" => "112",
            "minerals" => "21",
            "attack" => "321",
        ),
    );

    I'm playing with OOP and I want to display the information in this array using a class, but I don't know how to construct the class to display it. This is what I have so far, and I don't know where to go from here. Am I supposed to use setters and getters?

    class gamesInfo($game) {
        $unitname;
        $cost;
        $gas;
        $minerals;
        $attack;
    }
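    The shape of the answer is the same in most object-oriented languages: give the class a constructor that accepts one unit's data and a method that displays it, then loop over the array. A minimal sketch of that pattern in Python (class and method names are illustrative; in PHP the constructor would be __construct and the rest carries over directly):

    # One object per unit: the constructor takes that unit's data,
    # display() prints it.
    starcraft = {
        "drone":  {"cost": "6_0-", "gas": "192", "minerals": "33", "attack": "123"},
        "zealot": {"cost": "5_0-", "gas": "112", "minerals": "21", "attack": "321"},
    }

    class GameUnit:
        def __init__(self, name, info):
            self.name = name
            self.gas = info["gas"]
            self.minerals = info["minerals"]
            self.attack = info["attack"]

        def display(self):
            print(f"{self.name}: gas={self.gas}, "
                  f"minerals={self.minerals}, attack={self.attack}")

    for name, info in starcraft.items():
        GameUnit(name, info).display()

    Getters and setters only earn their keep once you need validation or computed values; for plain display, a constructor plus a display method is enough.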

    Read the article

  • What is a reasonably priced map API solution for a startup?

    - by Kevin
    I've been developing my application with Google Maps and the wonderful Rails plugin for it, expecting that when I put my app into production, the commercial licensing wouldn't be too expensive. Then I found out it costs $10,000/year, no exceptions so far. http://www.47hats.com/2009/07/google-maps-the-10k-gotcha/ That's not a terrible price to pay for unlimited usage once your site becomes successful, but for those of us trying to build something from the ground up, it's a hefty price. I've looked at Bing and Yahoo, but they're very vague about what ballpark their pricing is in. On top of that, I'd have to ditch my nice YM4R Rails plugin for Google Maps... Is anyone out there using a map API that doesn't cost an arm and a leg to get started with commercially? I don't mind doing without a plugin; I just need something that works and is affordable in the beginning.

    Read the article
