Search Results

Search found 3701 results on 149 pages for 'cost threshold for parall'.

Page 22/149 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Primavera product lineup

    - by hhata
    An overview of the Primavera product lineup: 1. Primavera P6 EPPM (Enterprise Project Portfolio Management). 2. Primavera Cost Controls. 3. Primavera Project Delivery Management. 4. Primavera Capital Planning. 5. Oracle Instantis EnterpriseTrack, a project portfolio management (PPM) offering for IT organizations and program management offices (PMOs). Inquiries: Tel 03-6834-5241.

    Read the article

  • C: copying some structs causes strange behavior

    - by Jenny B
    I have an annoying bug in the line rq->tickets[rq->usedContracts] = toAdd;. If I do rq->tickets[0] = toAdd the program crashes; if I do rq->tickets[1] = toAdd; it works. valgrind says ==19501== Use of uninitialised value of size 8 and ==19501== Invalid write of size 8 for this very line. What is wrong?

        struct TS_element {
            int travels;
            int originalTravels;
            int cost;
            char** dates;
            int moneyLeft;
        } TicketSet;

        struct RQ_element {
            int usedContracts;
            struct TS_element* tickets;
        } RabQav;

        TicketSetStatus tsCreate(TicketSet* t, int n, int c) {
            if (n <= 0)
                return TS_ILLEGAL_PARAMETER;
            TicketSet* myTicketSet = (TicketSet*) malloc(sizeof(TicketSet));
            if (myTicketSet == NULL) {
                return TS_CANNOT_CREATE;
            }
            myTicketSet->usedTravels = 0;
            myTicketSet->originalTravels = n;
            myTicketSet->cost = c;
            myTicketSet->moneyLeft = n * c;
            char** dates = malloc(sizeof(char**) * (n)); //todo maybe c99 allows dynamic arrays?
            for (int i = 0; i < n; i++) {
                dates[i] = malloc(sizeof(char)*GOOD_LENGTH+1);
                if (dates[i] == NULL) {
                    free(dates);
                    free(t);
                    return TS_CANNOT_CREATE;
                }
            }
            myTicketSet->dates = dates;
            *t = *myTicketSet;
            return TS_SUCCESS;
        }

        static void copyTicketSets(TicketSet* dest, const TicketSet* source) {
            dest->usedTravels = source->usedTravels;
            dest->originalTravels = source->originalTravels;
            dest->cost = source->cost;
            dest->moneyLeft = source->moneyLeft;
            for (int i = 0; i < source->originalTravels; i++) {
                if (NULL != source->dates[i]) {
                    free(dest->dates[i]);
                    dest->dates[i] = malloc(sizeof(char) * GOOD_LENGTH + 1);
                    if (dest->dates[i] == NULL) {
                        free(dest->dates); //todo free dates 0...i-1
                        free(dest);
                        return;
                    }
                    strcpy(dest->dates[i], source->dates[i]);
                }
            }
        }

        RabQavStatus rqLoadTS(RabQav* rq, TicketSet t, DateTime dt) {
            TicketSet toAdd;
            TicketSetStatus res = tsCreate(&toAdd, t.originalTravels, t.cost);
            if (res != TS_SUCCESS) {
                return RQ_FAIL;
            }
            copyTicketSets(&toAdd, &t);
            rq->tickets[rq->usedContracts] = toAdd;
            rq->usedContracts++;
            return RQ_SUCCESS;
        }
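    For what it's worth, the valgrind messages point at rq->tickets itself: nothing in the posted code ever allocates that array before rqLoadTS writes into it. A minimal sketch of the missing setup (the capacity constant and function name here are illustrative, not from the original):

        #include <stdlib.h>

        #define MAX_CONTRACTS 16 /* assumed capacity, adjust as needed */

        RabQavStatus rqCreate(RabQav* rq)
        {
            rq->usedContracts = 0;
            /* allocate backing storage for the tickets array; without this,
               rq->tickets[...] = toAdd writes through an uninitialised pointer */
            rq->tickets = malloc(sizeof(struct TS_element) * MAX_CONTRACTS);
            if (rq->tickets == NULL)
                return RQ_FAIL;
            return RQ_SUCCESS;
        }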

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and Eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs. Challenges A word from Omnilogic “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment Demonstrate the power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating the risk of unplanned outages Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners Solutions Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, and support seminars, and to showcase live demonstrations for Oracle Partners Proved how Oracle Exadata’s pre-engineered systems, which come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve their full performance potential immediately after go-live Increased processing speeds 10-fold, with zero data loss, for a telecommunications provider’s client-facing customer relationship management solution Achieved performance improvements of between 6 and 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience Demonstrated that Oracle Exadata’s built-in analytics, data mining and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allow 10 times more data to be stored than on competitor platforms Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • The battery indicator & Power Settings panel show the wrong battery state

    - by Eastsun
    My laptop is a Thinkpad E420 with Ubuntu 12.04 64-bit installed; the kernel version is 3.2.0-33-generic. I set the battery charge threshold to 60% via Windows 7, and the threshold seems to have taken effect automatically in Ubuntu. However, there are some problems with the battery indicator's state. I'll list some information about the battery state below. (Note that in the terminal Ubuntu says the battery charging state is charged, while the Power Settings panel and the battery indicator both show the state as charging.)

        $ cat /proc/acpi/battery/BAT0/state
        present:                 yes
        capacity state:          ok
        *charging state:         charged*
        present rate:            0 mW
        remaining capacity:      18200 mWh
        present voltage:         16103 mV

    (Screenshots: battery indicator state; Power Settings panel.) Is there any way to fix the problem? Edit: Some results via sudo fwts battery - battery.log: 3 passed, 4 failed, 0 warnings, 0 aborted, 0 skipped, 0 info only.

        Test Failure Summary
        ===============================
        Critical failures: NONE
        High failures: 2
          battery: Did not detect any ACPI battery events.
          battery: Could not detect ACPI events for battery BAT0.
        Medium failures: 1
          battery: Battery BAT0 claims it's charging but no charge is added
        Low failures: 1
          battery: System firmware may not support cycle count interface or it reports it incorrectly for battery BAT0.
        Other failures: NONE

        Test           |Pass |Fail |Abort|Warn |Skip |Info |
        ---------------+-----+-----+-----+-----+-----+-----+
        battery        |    3|    4|     |     |     |     |
        ---------------+-----+-----+-----+-----+-----+-----+
        Total:         |    3|    4|    0|    0|    0|    0|
        ---------------+-----+-----+-----+-----+-----+-----+

    Any help would be appreciated!

    Read the article

  • SQL SERVER – Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5

    - by pinaldave
    In August 2011 we ran a contest where every day we gave away one book, for an entire month. The contest was a huge success: lots of people participated and lots of books were given away. I have received many questions asking whether we are doing something similar this month. Absolutely; instead of running a month-long contest, we are doing something more interesting. We are giving away a gift worth USD 198 every day this week: the Joes 2 Pros 5-volume (BOOK) SQL 2008 Development Certification Training Kit, one copy in India and one in the USA, two giveaways in total (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza How to Win: Read the question. Read the hints. Answer the quiz in the contact form in the following format: Question Answer Name of the country (the contest is open for USA and India residents only). 2 winners will be randomly selected and announced on August 20th. Question of the Day: Which of the following keywords will force the query to use indexes created on views? a) ENCRYPTION b) SCHEMABINDING c) NOEXPAND d) CHECK OPTION Query Hints: BIG HINT POST Usually the assumption is that a query on the table will use an index on the table, and a query on the view will use an index on the view. However, that is a misconception; it does not happen this way. In fact, if you look at the image, you will find that both of them (table and view) use the index created on the table; the index created on the view is not used. The reason is listed in BOL: the cost of using the indexed view may exceed the cost of getting the data from the base tables, or the query may be so simple that a plan against the base tables is fast and easy to find. This often happens when the indexed view is defined on small tables. You can use the NOEXPAND hint if you want to force the query processor to use the indexed view. This may require you to rewrite your query if you don’t initially reference the view explicitly. You can get the actual cost of the query with NOEXPAND and compare it to the actual cost of the query plan that doesn’t reference the view. If they are close, this may give you the confidence that the decision of whether or not to use the indexed view doesn’t matter. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 4. SQL Joes 2 Pros Development Series – Structured Error Handling SQL Joes 2 Pros Development Series – SQL Server Error Messages SQL Joes 2 Pros Development Series – Table-Valued Functions SQL Joes 2 Pros Development Series – Table-Valued Store Procedure Parameters SQL Joes 2 Pros Development Series – Easy Introduction to CHECK Options SQL Joes 2 Pros Development Series – Introduction to Views SQL Joes 2 Pros Development Series – All about SQL Constraints Next Step: Answer the quiz in the contact form in the following format: Question Answer Name of the country (the contest is open for USA and India). Bonus Winner: Leave a comment with your favorite article from the “additional hints” section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked the particular blog post and I will announce the winner along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
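    For readers who want to try the hinted behavior themselves, here is a minimal sketch (the view name is made up for illustration):

        -- An indexed view is only reliably used when referenced WITH (NOEXPAND):
        SELECT SUM(Total)
        FROM dbo.vSalesSummary WITH (NOEXPAND);

        -- Without the hint, the optimizer may expand the view and build the
        -- plan against the base tables even though the view has a clustered index.
        SELECT SUM(Total)
        FROM dbo.vSalesSummary;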

    Read the article

  • Unleash the Power of Cryptography on SPARC T4

    - by B.Koch
    by Rob Ludeman. Oracle’s SPARC T4 systems are architected to deliver enhanced value for customers via the inclusion of many integrated features. One of the best examples of this approach is the on-chip cryptographic support that delivers wire-speed encryption capabilities without any impact on application performance. The Evolution of SPARC Encryption SPARC T-Series systems have a long history of providing this capability, dating back to the release of the first T2000 systems, which featured support for on-chip RSA encryption directly in the UltraSPARC T1 processor. Successive generations have built on this approach by supporting additional encryption ciphers that are tightly coupled with the Oracle Solaris 10 and Solaris 11 encryption framework. While earlier versions of this technology were implemented using co-processors, the SPARC T4 was redesigned with new crypto instructions to eliminate some of the performance overhead associated with the former approach, resulting in much higher performance for encrypted workloads. The Superiority of the SPARC T4 Approach to Crypto As companies continue to engage in more and more e-commerce, the need to provide greater degrees of security for these transactions is more critical than ever before. Traditional methods of securing data in transit have a number of drawbacks that are addressed by the SPARC T4 cryptographic approach. 1. Performance degradation – cryptography is highly compute-intensive, and therefore there is a significant cost when using other architectures without embedded crypto functionality. This performance penalty impacts the entire system, slowing down the performance of web servers (SSL), for example, and potentially bogging down the speed of other business applications. The SPARC T4 processor enables customers to deliver high levels of security to internal and external customers without incurring an impact to overall SLAs in their IT environment. 2. Added cost – one method of avoiding performance degradation is the addition of add-in cryptographic accelerator cards or external offload engines in other systems. While these solutions provide a brute-force mechanism to avoid the problem of slower system performance, they usually come at an added cost. Customers looking to encrypt datacenter traffic without the overhead and expenditure of extra hardware can rely on SPARC T4 systems to deliver the performance necessary without the need to purchase other hardware or add-on cards. 3. Higher complexity – the addition of cryptographic cards, or leveraging load balancers to perform encryption tasks, results in added complexity from a management standpoint. With SPARC T4, encryption keys and the framework built into Solaris 10 and 11 mean that administrators generally don’t need to spend extra cycles determining how to perform cryptographic functions. In fact, many of the instructions are built in and require no user intervention to be utilized. For example, for OpenSSL on Solaris 11, SPARC T4 crypto is available directly with a new built-in OpenSSL 1.0 engine, called the "t4 engine." For a deeper technical dive into the new instructions included in SPARC T4, consult Dan Anderson’s blog. Conclusion In summary, SPARC T4 systems offer customers much more value for applications than just increased performance.
    The integration of key virtualization technologies, embedded encryption, and a true enterprise operating system, Oracle Solaris, provides direct business benefits that supersede the commodity approach to data center computing. SPARC T4 removes the roadblocks to secure computing by offering integrated crypto accelerators that can save IT organizations operating costs while delivering higher levels of performance and meeting compliance objectives. For more on the SPARC T4 family of products, go here.

    Read the article

  • SQL SERVER – Relationship of Parallelism with Locks and Query Wait – Question for You

    - by Pinal Dave
    Today I have one very simple question, based on the following image. Full disclosure: I have no idea why it is like this. I tried to reach out to a few of my friends who know a lot about SQL Server, but no one had an answer. Here is the question: if you go to Server Properties and click on Advanced, you will see the following screen. Under the Parallelism section you will notice there are four options: Cost Threshold for Parallelism, Locks, Max Degree of Parallelism, Query Wait. I can clearly understand why Cost Threshold for Parallelism and Max Degree of Parallelism belong under Parallelism, but I am not sure why the other two options, Locks and Query Wait, belong in the Parallelism section. I can see that the options are ordered alphabetically, but I do not understand the reason for Locks and Query Wait to be listed under Parallelism. Here is the question for you – why are the Locks and Query Wait options listed under the Parallelism section in SQL Server Advanced Properties? Please leave a comment with your explanation. I will publish valid answers on this blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
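    For anyone who wants to inspect the same four settings outside the UI, they also surface in the catalog; a small sketch, with the option names as sp_configure/sys.configurations spell them:

        SELECT name, value_in_use
        FROM sys.configurations
        WHERE name IN ('cost threshold for parallelism',
                       'locks',
                       'max degree of parallelism',
                       'query wait (s)');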

    Read the article

  • AWS Autoscaling issue with existing nodes in ELB

    - by Ram Prasad
    I already have an ELB setup called MyLoadBalancer. I already have 2 nodes running on it with health checks (that check a URL on the node to see if they are up). Created an autoscaling group (min 2, max 10). Associated the launch config mylaunchconfig that provisions a node using an AMI. Created a trigger that checks for avg connections of min 100 and max 500 (it checks the load balancer and is supposed to increase the node count by 1 if avg connections are 500, and decrease it by one if less than 100): as-create-or-update-trigger MyTrigger --auto-scaling-group MyAutoScalingGroup --namespace "AWS/ELB" --measure RequestCount --statistic Average --dimensions "LoadBalancerName=MyLoadBalancer" --period 60 --lower-threshold 500 --upper-threshold 800 --lower-breach-increment=-1 --upper-breach-increment=1 --breach-duration 600 Now the issue is, as soon as I put in the trigger, it starts 2 nodes .... but there are already two nodes in the LB. So why is it provisioning 2 more nodes when the nodes are already there? Is it because it is not recognizing the existing 2 nodes? Then how do I add the existing nodes to the AutoScaling group?

    Read the article

  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5TB in a custom enclosure) showed signs of failure (Disk Utility SMART Pre-failure status - several bad sectors) and I decided to replace it. I bought a new HDD (Seagate Barracuda 2TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new one (one partition with rsync, the other with parted cp) and then gently swapped the old HDD for the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure, the Linux box could not detect my partitions. I recovered the partitions with testdisk and restarted the computer. After the restart I checked the SMART status of the new HDD and I got this:

        Read Error Rate
        ---------------
        Normalized  108
        Worst        99
        Threshold     6
        Value  16737944

    I also got a high value on the Seek Error Rate. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status (5 minutes later). This time I got the following:

        Read Error Rate
        ---------------
        Normalized  109
        Worst        99
        Threshold     6
        Value  24792504

    As you can see, there has been an increase in the error rate. I am unable to interpret these numbers. Is my new hard disk already dying? What are the acceptable values in these fields for Seagate hard disks? And why is the overall assessment still good? While I could get temperature and airflow temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD sometimes got really hot. Is it possible that the enclosure is killing the hard disks due to high temperature?... Thanks

    Read the article

  • Disk failure is imminent Laptop Hard drive ~5 months old

    - by Drew
    There's another post about this, but I don't have enough 'points' to say anything on that thread. So I'll start my own ... with more details! My computer still boots, but GNOME reports problems with the HDD's SMART status. This has been confirmed in the BIOS, as it makes me press F1 to boot up now. I tried running the HDD disk check in the BIOS, but it fails running the tests. As in, running the tests failed, not that the tests themselves indicated a failed drive. Here is what Disk Utility reports as failing:

        Reallocated Sector Count FAILING
        Normalized: 132  Worst: 132  Threshold: 140  Value: 544
        Current Pending Sector Count WARNING
        Normalized: 200  Worst: 1    Threshold: 0    Value: 2

    Is this related to the insane number of DRDY errors on the drive?

        kernel: [51345.233069] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
        kernel: [51345.233076] ata1.00: BMDMA stat 0x4
        kernel: [51345.233081] ata1.00: failed command: READ DMA
        kernel: [51345.233090] ata1.00: cmd c8/00:00:00:8b:4a/00:00:00:00:00/e0 tag 0 dma 131072 in
        kernel: [51345.233092] res 51/40:00:a8:8b:4a/10:04:00:00:00/e0 Emask 0x9 (media error)
        kernel: [51345.233097] ata1.00: status: { DRDY ERR }
        kernel: [51345.233103] ata1.00: error: { UNC }
        kernel: [51345.291929] ata1.00: configured for UDMA/100
        kernel: [51345.291944] ata1: EH complete
        kernel: [51347.682748] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
        kernel: [51347.682754] ata1.00: BMDMA stat 0x4
        kernel: [51347.682759] ata1.00: failed command: READ DMA
        kernel: [51347.682768] ata1.00: cmd c8/00:00:00:8b:4a/00:00:00:00:00/e0 tag 0 dma 131072 in
        kernel: [51347.682770] res 51/40:00:a8:8b:4a/10:04:00:00:00/e0 Emask 0x9 (media error)
        kernel: [51347.682774] ata1.00: status: { DRDY ERR }
        kernel: [51347.682777] ata1.00: error: { UNC }

    Did Ubuntu 10.10 and/or EXT4 eat my work laptop? What steps can I take to back up my important information, which is probably the home folder? Please include steps to recover my data onto the new hard drive as well. It does me little good to have backups I can't use.

    Read the article

  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance that is ideal for running applications and databases that traditionally run on separate dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical. You are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by iron, so now you need to provide assurance that one group, department, or application's information is not visible to other personnel or applications resident in the Exadata system. Administration of the environments requires formal separation of duties, so an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments, just like other critical production servers: they should be in a data center protected by physical controls, network firewalls, intrusion detection and prevention, etc. Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features: 1. InfiniBand, which functions as a secure private backplane 2. IPTables, which performs stateful packet inspection for all nodes (Cellwall implements firewall services on each cell using IPTables) 3. Hardware-accelerated encryption for data at rest on storage cells. Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment. Database Vault Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege, and other preventive controls to ensure data integrity and data privacy. Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks: for example, controlling when a command can be executed, or letting the DBA manage the health of the database and its objects without being able to see the data. Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection. Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification.
    Designed to meet public-sector requirements for multi-level security and mandatory access control, Oracle Label Security provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance. Data Masking reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc., by irreversibly replacing the original sensitive data with fictitious data so that production data can be shared safely with IT developers or offshore business partners. Audit Vault and Database Firewall Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks. Consolidation, cost savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle’s industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and local governments and agencies.

    Read the article

  • Wikipedia A* pathfinding algorithm takes a lot of time

    - by Vee
    I've successfully implemented A* pathfinding in C#, but it is very slow, and I don't understand why. I even tried not sorting the openNodes list, but it's still the same. The map is 80x80, and there are 10-11 nodes. I took the pseudocode from Wikipedia. And this is my implementation:

        public static List<PGNode> Pathfind(PGMap mMap, PGNode mStart, PGNode mEnd)
        {
            mMap.ClearNodes();
            mMap.GetTile(mStart.X, mStart.Y).Value = 0;
            mMap.GetTile(mEnd.X, mEnd.Y).Value = 0;
            List<PGNode> openNodes = new List<PGNode>();
            List<PGNode> closedNodes = new List<PGNode>();
            List<PGNode> solutionNodes = new List<PGNode>();
            mStart.G = 0;
            mStart.H = GetManhattanHeuristic(mStart, mEnd);
            solutionNodes.Add(mStart);
            solutionNodes.Add(mEnd);
            openNodes.Add(mStart); // 1) Add the starting square (or node) to the open list.
            while (openNodes.Count > 0) // 2) Repeat the following:
            {
                openNodes.Sort((p1, p2) => p1.F.CompareTo(p2.F));
                PGNode current = openNodes[0]; // a) We refer to this as the current square.
                if (current == mEnd)
                {
                    while (current != null)
                    {
                        solutionNodes.Add(current);
                        current = current.Parent;
                    }
                    return solutionNodes;
                }
                openNodes.Remove(current);
                closedNodes.Add(current); // b) Switch it to the closed list.
                List<PGNode> neighborNodes = current.GetNeighborNodes();
                double cost = 0;
                bool isCostBetter = false;
                for (int i = 0; i < neighborNodes.Count; i++)
                {
                    PGNode neighbor = neighborNodes[i];
                    cost = current.G + 10;
                    isCostBetter = false;
                    if (neighbor.Passable == false || closedNodes.Contains(neighbor))
                        continue; // If it is not walkable or if it is on the closed list, ignore it.
                    if (openNodes.Contains(neighbor) == false)
                    {
                        openNodes.Add(neighbor); // If it isn’t on the open list, add it to the open list.
                        isCostBetter = true;
                    }
                    else if (cost < neighbor.G)
                    {
                        isCostBetter = true;
                    }
                    if (isCostBetter)
                    {
                        neighbor.Parent = current; // Make the current square the parent of this square.
                        neighbor.G = cost;
                        neighbor.H = GetManhattanHeuristic(current, neighbor);
                    }
                }
            }
            return null;
        }

    Here's the heuristic I'm using:

        private static double GetManhattanHeuristic(PGNode mStart, PGNode mEnd)
        {
            return Math.Abs(mStart.X - mEnd.X) + Math.Abs(mStart.Y - mEnd.Y);
        }

    What am I doing wrong? I've spent an entire day looking at the same code.
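    Two costs dominate the loop above: the full Sort of openNodes on every iteration and the O(n) Contains scans on the Lists. A rough sketch of the usual fix (assumes PGNode's X/Y uniquely identify a node, and .NET 4.5's Comparer<T>.Create; on older frameworks, write a small IComparer<PGNode> class instead):

        // Order by F, then by coordinates: SortedSet silently drops items
        // that compare equal, so the tie-breaker is essential.
        var byF = Comparer<PGNode>.Create((a, b) =>
        {
            int c = a.F.CompareTo(b.F);
            if (c != 0) return c;
            c = a.X.CompareTo(b.X);
            return c != 0 ? c : a.Y.CompareTo(b.Y);
        });
        var open = new SortedSet<PGNode>(byF);  // cheapest node = open.Min, O(log n)
        var closed = new HashSet<PGNode>();     // O(1) membership test

        open.Add(mStart);
        while (open.Count > 0)
        {
            PGNode current = open.Min;          // no per-iteration Sort needed
            open.Remove(current);
            closed.Add(current);
            // ...relax neighbors as before; when a neighbor's G (and hence F)
            // changes, Remove it and re-Add it so the set re-orders it.
        }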

    Read the article

  • Convert color photos of documents to good black-and-white images?

    - by Norman Ramsey
    Since I don't have a copier or scanner, I'm using an 8-megapixel camera to copy documents. This works pretty well, except that the photos need a lot of processing afterward. I'd like to get from a photo to a bitmap, but using djpeg -grayscale -pnm photo.jpg | pgmtopbm -threshold -value XXX does not work so well, for two reasons: It's hard to guess what XXX should be, and XXX is different for different photos. Illumination varies, and sometimes a single threshold isn't right for the whole image. How can I do better? The ideal solution would be a fully automatic command-line program that I can run on Linux. (I have already written a program to remove dark pixels from the edges of images.)
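    Since uneven illumination is the core problem, a per-neighborhood (adaptive) threshold is the usual answer; one possible command-line route, assuming ImageMagick is installed (the window size and offset are starting points to tune):

        # -lat = local adaptive threshold: each pixel is compared to the mean
        # of its 25x25 neighborhood minus 5%, so lighting drift stops mattering
        convert photo.jpg -colorspace Gray -lat 25x25-5% page.pbm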

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it fighting to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • OpenGL ES rotate texture

    - by 0xSina
    I just got started with OpenGL ES... I have a fragment shader:

        const char * sFragment = _STRINGIFY(
        varying highp vec2 coordinate;
        precision mediump float;
        uniform vec4 maskC;
        uniform float threshold;
        uniform sampler2D videoframe;
        uniform sampler2D videosprite;
        uniform vec4 mask;
        uniform vec4 maskB;
        uniform int recording;

        vec3 normalize(vec3 color, float meanr) {
            return color*vec3(0.75 + meanr, 1., 1. - meanr);
        }

        void main() {
            float d;
            float dB;
            float dC;
            float meanr;
            float meanrB;
            float meanrC;
            float minD;
            vec4 pixelColor;
            vec4 spriteColor;
            pixelColor = texture2D(videoframe, coordinate);
            spriteColor = texture2D(videosprite, coordinate);
            meanr = (pixelColor.r + mask.r)/8.;
            meanrB = (pixelColor.r + maskB.r)/8.;
            meanrC = (pixelColor.r + maskC.r)/8.;
            d = distance(normalize(pixelColor.rgb, meanr), normalize(mask.rgb, meanr));
            dB = distance(normalize(pixelColor.rgb, meanrB), normalize(maskB.rgb, meanrB));
            dC = distance(normalize(pixelColor.rgb, meanrC), normalize(maskC.rgb, meanrC));
            minD = min(d, dB);
            minD = min(minD, dC);
            gl_FragColor = spriteColor;
            if (minD > threshold) {
                gl_FragColor = pixelColor;
            }
        }

    Now, depending on whether recording is 0 or 1, I want to rotate the uniform sampler2D videosprite 180 degrees (reflection in the x-axis, flipped vertically). How can I do that? I found the function glRotatef(), but how do I specify that I want to rotate videosprite and not videoframe? Thanks
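    For what it's worth, glRotatef() belongs to the fixed-function pipeline and doesn't touch individual samplers; inside a shader the usual trick is to flip the lookup coordinate itself. A minimal sketch of the idea (assumes coordinate runs over [0,1]):

        // A 180-degree rotation mirrors both axes of the texture coordinate;
        // for a pure vertical flip, use vec2(coordinate.x, 1.0 - coordinate.y).
        // Only the videosprite lookup changes; videoframe is left alone.
        vec2 spriteCoord = (recording == 1) ? vec2(1.0) - coordinate : coordinate;
        spriteColor = texture2D(videosprite, spriteCoord);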

    Read the article

  • Brightness too dark or too bright

    - by YatharthROCK
    Basically, below a seemingly arbitrary threshold (it seems to be around 60%) my laptop screen is too dark to really make anything out, and above that threshold it jumps to maximum brightness (which is way too bright for my eyes). This problem only occurs when I change the brightness using Windows' UI. Changing the brightness using the Fn keys on my keyboard works as expected, unless the screen went into dark mode because of Windows, in which case they have no effect. This problem started occurring on my ASUS laptop very recently. My laptop has a poor battery, but I don't think that's related. What is going on? I tried resetting the power plan settings, turning off adaptive brightness, and restarting. I'll give any info asked for. Help required urgently...

    Read the article

  • SQL-Server 2008 : Table Insert and Range Check ?

    - by LB .
    I'm using the table value constructor to insert a bunch of rows at a time. However, since I'm using SQL replication, I run into a range check constraint on the publisher for my automatically managed id column. The reason is that the id range doesn't seem to be increased during an insert of several values, meaning that the max id (the range threshold) is reached before the actual range expansion can occur. It looks like this problem, for which the solution is either running the merge agent or running the sp_adjustpublisheridentityrange stored procedure. I'm literally doing something like:

        INSERT INTO dbo.MyProducts (Name, ListPrice)
        VALUES ('Helmet', 25.50),
               ('Wheel', 30.00),
               ((SELECT Name FROM Production.Product WHERE ProductID = 720),
                (SELECT ListPrice FROM Production.Product WHERE ProductID = 720));
        GO

    What are my options (if I don't want to, or can't, adopt either of the proposed solutions)? Expand the range? Decrease the threshold? Can I programmatically modify my request to circumvent this problem? Thanks.
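    If running the stored procedure manually turns out to be acceptable after all, here is a sketch of the call it refers to (parameter values are illustrative, matching the sample table above):

        -- Adjusts the publisher-side identity range for the published article
        -- so the bulk insert no longer trips the range check constraint.
        EXEC sys.sp_adjustpublisheridentityrange
            @table_name  = N'MyProducts',
            @table_owner = N'dbo';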

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill. Consolidation and virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility, and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin-driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand. Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free. A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost, they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle. Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager target: Oracle VM Host, Oracle Database, Oracle WebLogic Server. Using Enterprise Manager Chargeback, administrators are able to create a set of charge plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (e.g. $10/month/database), configuration-based costs (e.g. $10/month if the OS is Windows) and utilization-based costs (e.g. $0.05/GB of memory/hour). The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine whether the resources are delivering adequate business value based on what is being charged. Figure 1: Chargeback in Self-Service Portal Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports.
    Summary reports allow the administrator to drill down through the cost center hierarchy to identify, for example, the top resource consumers across the organization. Figure 2: Charge Summary Report Trending reports can be used for I.T. planning and budgeting, as they show utilization and charge trends over a period of time. Figure 3: CPU Trend Report We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as line-of-business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM). Further information on Enterprise Manager 12c’s integrated metering and chargeback: White Paper | Screenwatch | Cloud Management on OTN

    Read the article

  • Is 50% download speed on a wireless G network normal?

    - by Bartlomiej Skwira
    I have a wired connection of about 36 Mb/s, but my wireless speed maxes out at about 18-19 Mb/s. I have a WRT54G-TM (T-Mobile, 802.11g) router with DD-WRT firmware - I've upgraded it to the latest build. I've made some settings changes: channel - 13; wireless network mode - G-only; ACK Timing - 0; Fragmentation Threshold and RTS Threshold - 2304; Basic Rate - All. Signal/noise ratio: -46/-94, signal quality ~50-60%. Is this normal with G networks? Edit: The AP is located about 2 meters from the laptop, no walls or metal objects, but it's next to a TV. I've done a channel scan (had problems locating it - go to "Status - Wireless - Site survey" - lame naming) and everybody else is on channels 1 and 6. I switched to channel 11 but it didn't help. As for transmit power, I got the best results with the default 71mW. The antenna might be a factor; I'm using the default 2 antennas.

    Read the article

  • Pass data from one form to another on a seperate page

    - by Micanio
    I am building a price/distance calculator with Google Maps API and am trying to pass the info from the calculator to a booking form on a separate page. My first form has 2 submit buttons - one to make the calculation, and one to submit the relevant data to the booking form. I'm stuck trying to make the 2nd button work. Once the API calculation has been made, I get 4 values - From, To, Cost, Distance. I am trying to pass the From, To and Cost values into my booking form by clicking the second button. But I can;t seem to get it to work. I've tried POST and GET but I think I may have been doing something wrong with both. Any help is appreciated. Code for API form: <script type="text/javascript" src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=ABQIAAAAwCUxKrPl8_9WadET5dc4KxTqOwVK5HCwTKtW27PjzpqojXnJORQ2kUsdCksByD4hzcGXiOxvn6C4cw&sensor=true"></script> <script type="text/javascript"> var geocoder = null; var location1 = null; var location2 = null; var gDir = null; var directions = null; var total = 0; function roundNumber(num, dec) { var result = Math.floor(num*Math.pow(10 ,dec))/Math.pow(10,dec); return result; } function from(form) { address1=form.start.options[form.start.selectedIndex].value form.address1.value=address1 form.address1.focus() } function to(form) { address2=form.end.options[form.end.selectedIndex].value form.address2.value=address2 form.address2.focus() } function initialize() { var map = new GMap2(document.getElementById("map_canvas")); map.setCenter(new GLatLng(54.019066,-1.381531),9); map.setMapType(G_NORMAL_MAP); geocoder = new GClientGeocoder(); gDir = new GDirections(map); GEvent.addListener(gDir, "load", function() { var drivingDistanceMiles = gDir.getDistance().meters / 1609.344; var drivingDistanceKilometers = gDir.getDistance().meters / 1000; var miles = drivingDistanceMiles.toFixed(0); //var cost = (((miles - 1) * 1.9) + 3.6).toFixed(2); var meters = gDir.getDistance().meters.toFixed(1); if(miles < 70){ var cost = miles *1.75; } if(miles >70){ var cost = miles *1.2; } document.getElementById('from').innerHTML = '<strong>From: </strong>' + location1.address; document.getElementById('to').innerHTML = '<strong>To: </strong>' + location2.address; document.getElementById('cost').innerHTML = '<span class="fare"><strong>Estimated Taxi FARE:</strong>' + ' £' + cost.toFixed(2) + '</span>'; document.getElementById('miles').innerHTML = '<strong>Distance: </strong>' + miles + ' Miles'; }); } function showLocation() // start of possible values for address not recognized on google search // values for address1 { if (document.forms[0].address1.value == "heathrow" || document.forms[0].address1.value == "Heathrow" || document.forms[0].address1.value == "heathrow airport" || document.forms[0].address1.value == "Heathrow Airport" || document.forms[0].address1.value == "London Heathrow" || document.forms[0].address1.value =="london heathrow" ) { (document.forms[0].address1.value = "Heathrow Airport"); } if (document.forms[0].address2.value == "heathrow" || document.forms[0].address2.value == "Heathrow" || document.forms[0].address2.value == "heathrow airport" || document.forms[0].address2.value == "Heathrow Airport" || document.forms[0].address2.value == "London Heathrow" || document.forms[0].address2.value =="london heathrow" ) { (document.forms[0].address2.value = "Heathrow Airport"); } geocoder.getLocations(document.forms[0].address1.value + document.forms[0].uk.value || document.forms[0].start.value + document.forms[0].uk.value, function (response) { if (!response || 
response.Status.code != 200) { alert("Sorry, we were unable to find the first address"); } else { location1 = {lat: response.Placemark[0].Point.coordinates[1], lon: response.Placemark[0].Point.coordinates[0], address: response.Placemark[0].address}; geocoder.getLocations(document.forms[0].address2.value + document.forms[0].uk.value, function (response) { if (!response || response.Status.code != 200) { alert("Sorry, we were unable to find the second address"); } else { location2 = {lat: response.Placemark[0].Point.coordinates[1], lon: response.Placemark[0].Point.coordinates[0], address: response.Placemark[0].address}; gDir.load('from: ' + location1.address + ' to: ' + location2.address); } }); } }); } </script> <style> #quote { font-family: Georgia, "Times New Roman", Times, serif; } </style> </head> <body style="background-color: rgb(255, 255, 255);" onUnload="GUnload()" onLoad="initialize()"> <div id="sidebar"> <!--MAPS--> <div id="calc_top"></div> <div id="calc_body"> <div id="calc_inside"> <span style="font-size: 16px; font-weight: bold;">Get A Quote Now</span> <p class="disclaimer">Fares can be calculated using either Area, Exact Address or Postcode, when entering address please include both road name and area i.e. <em>Harrogate Road, Ripon</em>. You can also select a pickup point and destination from the dropdown boxes. </p> <form onSubmit="showLocation(); return false;" action="#" id="booking_form"> <p> <select onChange="from(this.form)" name="start"> <option selected="selected">Select a Pickup Point</option> <option value="Leeds Bradford Airport">Leeds Bradford Airport</option> <option value="Manchester Airport">Manchester Airport</option> <option value="Teesside International Airport">Teeside Airport</option> <option value="Liverpool John Lennon Airport">Liverpool Airport</option> <option value="East Midlands Airport">East Midlands Airport</option> <option value="Heathrow International Airport">Heathrow Airport</option> <option value="Gatwick Airport">Gatwick Airport</option> <option value="Stanstead Airport">Stanstead Airport</option> <option value="Luton International Airport">Luton Airport</option> </select> </p> <p> <input type="text" value="From" name="address1"><br> <p> <select onChange="to(this.form)" name="end"> <option selected="selected">Select a Destination</option> <option value="Leeds Bradford Airport">Leeds Bradford Airport</option> <option value="Manchester Airport">Manchester Airport</option> <option value="Teesside International Airport">Teeside Airport</option> <option value="Liverpool John Lennon Airport">Liverpool Airport</option> <option value="East Midlands Airport">East Midlands Airport</option> <option value="Heathrow International Airport">Heathrow Airport</option> <option value="Gatwick Airport">Gatwick Airport</option> <option value="Stanstead Airport">Stanstead Airport</option> <option value="Luton International Airport">Luton Airport</option> </select> </p> <input type="text" value="To" name="address2"><br> <input type="hidden" value=" uk" name="uk"> <br> <input type="submit" value="Get Quote"> <input type="button" value="Reset" onClick="resetpage()"><br /><br /> <input type="submit" id="CBSubmit" value="Confirm and Book" action=""/> </p> </form> <p id="from"><strong>From:</strong></p> <p id="to"><strong>To:</strong></p> <p id="miles"><strong>Distance: </strong></p> <p id="cost"><span class="fare"><strong>Estimated Taxi FARE:</strong></span></p> <p id="results"></p> <div class="style4" style="width: 500px; height: 500px; position: relative; 
background-color: rgb(229, 227, 223);" id="map_canvas"></div> </div> </div> Code for Booking Form: <form method="post" action="contactengine.php" id="contact_form"> <p> <label for="Name" id="Name">Name:</label> <input type="text" name="Name" /> <label for="Email" id="Email">Email:</label> <input type="text" name="Email" /> <label for="tel" id="tel">Tel No:</label> <input type="text" name="tel" /><br /><br /> <label for="from" id="from">Pickup Point:</label> <input type="text" name="from" value="" /><br /><br /> <label for="to" id="to">Destination:</label> <input type="text" name="to" value=""/><br /> <label for="passengers" id="passengers">No. of passengers</label> <input type="text" name="passengers" /><br /><br /> <label for="quote" id="quote">Price of journey:</label> <input type="text" name="quote" value="" /><br /><br /> <label for="Message" id="Message">Any other info:</label> <textarea name="Message" rows="20" cols="40"></textarea> <br /> Are you an account holder?<br /> <label for="account" id="yes" /> Yes:</label> <input type="radio" class="radio" value="yes" name="account"> <label for="account" id="yes" /> No:</label> <input type="radio" class="radio" value="no" name="account"> </p> <small>Non-account holders will have to pay a £5 booking fee when confirming thier booking</small> <input type="submit" name="submit" value="Submit" class="submit-button" /> </p> </form> Thanks in advance
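    For what it's worth, one route that matches this two-page setup is to skip the POST/GET form plumbing and hand the values over in the query string; a minimal sketch (booking.html and window.lastQuote are illustrative names, not from the original code):

        // On the quote page: hijack the "Confirm and Book" button and redirect
        // with the computed values. Set window.lastQuote where the fare is
        // calculated (next to the document.getElementById('cost') update).
        document.getElementById('CBSubmit').onclick = function () {
          var qs = 'from=' + encodeURIComponent(document.forms[0].address1.value) +
                   '&to=' + encodeURIComponent(document.forms[0].address2.value) +
                   '&cost=' + encodeURIComponent(window.lastQuote || '');
          window.location.href = 'booking.html?' + qs;
          return false; // stop the normal form submission
        };

        // On the booking page: read the values back out of location.search
        // and drop them into the matching inputs.
        var params = {};
        location.search.substring(1).split('&').forEach(function (pair) {
          var kv = pair.split('=');
          params[kv[0]] = decodeURIComponent(kv[1] || '');
        });
        document.getElementsByName('from')[0].value = params.from || '';
        document.getElementsByName('to')[0].value = params.to || '';
        document.getElementsByName('quote')[0].value = params.cost || '';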

    Read the article

  • Parse JSON into a ListView friendly output

    - by Thomas McDonald
    So I have this JSON, which my activity then retrieves as a string:

        {"popular": {
          "authors_last_month": [
            { "url":"http://activeden.net/user/OXYLUS", "item":"OXYLUS", "sales":"1148", "image":"http://s3.envato.com/files/15599.jpg" },
            { "url":"http://activeden.net/user/digitalscience", "item":"digitalscience", "sales":"681", "image":"http://s3.envato.com/files/232005.jpg" },
            { ... }
          ],
          "items_last_week": [
            { "cost":"4.00", "thumbnail":"http://s3.envato.com/files/227943.jpg", "url":"http://activeden.net/item/christmas-decoration-balls/75682", "sales":"43", "item":"Christmas Decoration Balls", "rating":"3", "id":"75682" },
            { "cost":"30.00", "thumbnail":"http://s3.envato.com/files/226221.jpg", "url":"http://activeden.net/item/xml-flip-book-as3/63869", "sales":"27", "item":"XML Flip Book / AS3", "rating":"5", "id":"63869" },
            { ... }
          ],
          "items_last_three_months": [
            { "cost":"5.00", "thumbnail":"http://s3.envato.com/files/195638.jpg", "url":"http://activeden.net/item/image-logo-shiner-effect/55085", "sales":"641", "item":"image logo shiner effect", "rating":"5", "id":"55085" },
            { "cost":"15.00", "thumbnail":"http://s3.envato.com/files/180749.png", "url":"http://activeden.net/item/banner-rotator-with-auto-delay-time/22243", "sales":"533", "item":"BANNER ROTATOR with Auto Delay Time", "rating":"5", "id":"22243" },
            { ... }
          ]
        }}

    It can be accessed here as well, although since it's quite a long string I've trimmed the above down to display what is needed. Basically, I want to be able to access the items from "items_last_week" and create a list of them - originally my plan was to have the 'thumbnail' on the left with the 'item' next to it, but from playing around with the SDK today it appears too difficult or impossible to achieve, so I would be more than happy with just having the 'item' data from 'items_last_week' in the list. Coming from PHP, I'm struggling to use any of the JSON libraries available to Java, as it appears to take much more than a line of code to deserialize (I think that's the right word) the JSON, and they all appear to require some form of additional class, apart from the JSONArray/JSONObject script I have, which doesn't like the fact that items_last_week is nested (again, I think that's the JSON terminology) and takes an awfully long time to run on the Android emulator. So, in effect, I need a (preferably simple) way to pass the items_last_week data to a ListView. I understand I will need a custom adapter, which I can probably get my head around, but I cannot understand, no matter how much of the day I've just spent trying to figure it out, how to access certain parts of a JSON string.
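    A sketch of one way to pull items_last_week apart with Android's bundled org.json classes (jsonString stands in for the downloaded text):

        import org.json.JSONArray;
        import org.json.JSONException;
        import org.json.JSONObject;
        import java.util.ArrayList;
        import java.util.List;

        static List<String> itemNames(String jsonString) throws JSONException {
            // Walk the tree: root -> "popular" -> "items_last_week"
            JSONObject root = new JSONObject(jsonString);
            JSONArray items = root.getJSONObject("popular")
                                  .getJSONArray("items_last_week");
            List<String> names = new ArrayList<String>();
            for (int i = 0; i < items.length(); i++) {
                names.add(items.getJSONObject(i).getString("item"));
            }
            // The result can back a plain ArrayAdapter<String> for the ListView.
            return names;
        }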

    Read the article

  • Perl program for extracting the functions alone in a Ruby file

    - by thillaiselvan
    Hi all, I have the following Ruby program:

        puts "hai"

        def mult(a,b)
          a * b
        end

        puts "hello"

        def getCostAndMpg
          cost = 30000 # some fancy db calls go here
          mpg = 30
          return cost,mpg
        end

        AltimaCost, AltimaMpg = getCostAndMpg
        puts "AltimaCost = #{AltimaCost}, AltimaMpg = {AltimaMpg}"

    I have written a Perl script that extracts just the functions from a Ruby file, as follows:

        while (<DATA>){
            print if ( /def/ .. /end/ );
        }

    Here <DATA> is reading from the Ruby file, so the Perl program produces the following output:

        def mult(a,b)
          a * b
        end

        def getCostAndMpg
          cost = 30000 # some fancy db calls go here
          mpg = 30
          return cost,mpg
        end

    But if the function contains a nested block, say an if/elsif/else, it does not work: it captures only up to the "end" of the if block, not the "end" of the function. Kindly provide solutions. Example:

        def function
          if x > 2
            puts "x is greater than 2"
          elsif x <= 2 and x!=0
            puts "x is 1"
          else
            puts "I can't guess the number"
          end #----- My code parses only up to this end

    Thanks in Advance!
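    One approach that handles the nesting is to count block openers and closers instead of using the .. flip-flop; a rough sketch (assumes one statement per line and no stray "end"-like words inside strings):

        my $depth = 0;
        while (my $line = <DATA>) {
            # a def opens a function; other keywords open blocks that "end" closes
            $depth++ if $line =~ /^\s*def\b/;
            $depth++ if $depth > 0 && $line =~ /^\s*(if|unless|case|while|until|begin|do)\b/;
            print $line if $depth > 0;
            $depth-- if $depth > 0 && $line =~ /^\s*end\b/;
        }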

    Read the article

  • How to eliminate duplicate rows?

    - by Odette
    Hi guys, I'm back with my original query and I just have one question please (PS: I know I have to vote and register, and I promise I will do that today). With the following query (T-SQL) I am getting the correct results, except that there are now duplicates. I have been reading up and think I can use the PARTITION BY syntax - can you please show me how to incorporate it?

        WITH CALC1 AS (
            SELECT OTQUOT, OTIT01 AS ITEMS, ROUND(OQCQ01 * OVRC01,2) AS COST
            FROM @[email protected]
            WHERE OTIT01 < ''
            UNION ALL
            ...
            SELECT OTQUOT, OTIT10 AS ITEMS, ROUND(OQCQ10 * OVRC10,2) AS COST
            FROM @[email protected]
            WHERE OTIT10 < ''
        )
        SELECT OTQUOT, DESC, ITEMS, RN
        FROM (
            SELECT OTQUOT, ITEMS, B.IXRPGP AS GROUP, C.OTRDSC AS DESC, COST,
                   ROW_NUMBER() OVER (PARTITION BY OTQUOT ORDER BY COST DESC) AS RN
            FROM CALC1 AS A
            INNER JOIN @[email protected] AS B ON (A.ITEMS = B.IKITMC)
            INNER JOIN DATAGRP.GDSGRP AS C ON (B.IXRPGP = C.OKRPGP)
        ) T

    RESULTS:

        60408169 FENCING GNCPDCTP18BGBG 1
        60408169 FENCING CGIFESHPD1795BG 2
        60408169 FENCING GTTCGIBG 3
        60408169 FENCING GBTCGIBG 4

    How do I get rid of the duplicates? Thanks Bill and all the others for your help (I am still learning!)
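    If a "duplicate" here means more than one row per quote and product group, a sketch of the usual pattern is to keep the ROW_NUMBER() already being computed and filter on it in the outer query (adjust the partition columns to whatever defines uniqueness for you):

        SELECT OTQUOT, DESC, ITEMS, RN
        FROM (
            SELECT OTQUOT, ITEMS, C.OTRDSC AS DESC, COST,
                   ROW_NUMBER() OVER (PARTITION BY OTQUOT, C.OTRDSC
                                      ORDER BY COST DESC) AS RN
            FROM CALC1 AS A
            INNER JOIN @[email protected] AS B ON (A.ITEMS = B.IKITMC)
            INNER JOIN DATAGRP.GDSGRP AS C ON (B.IXRPGP = C.OKRPGP)
        ) T
        WHERE RN = 1  -- one row per (quote, group): the highest-cost item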

    Read the article

  • Need a workaround to filter on related model and aggregated fields in Django

    - by parxier
    I opened a ticket for this problem. In a nutshell, here is my model:

        class Plan(models.Model):
            cap = models.IntegerField()

        class Phone(models.Model):
            plan = models.ForeignKey(Plan, related_name='phones')

        class Call(models.Model):
            phone = models.ForeignKey(Phone, related_name='calls')
            cost = models.IntegerField()

    I want to run a query like this one:

        Phone.objects.annotate(total_cost=Sum('calls__cost')).filter(total_cost__gte=0.5*F('plan__cap'))

    Unfortunately Django generates bad SQL:

        SELECT "app_phone"."id", "app_phone"."plan_id", SUM("app_call"."cost") AS "total_cost"
        FROM "app_phone"
        INNER JOIN "app_plan" ON ("app_phone"."plan_id" = "app_plan"."id")
        LEFT OUTER JOIN "app_call" ON ("app_phone"."id" = "app_call"."phone_id")
        GROUP BY "app_phone"."id", "app_phone"."plan_id"
        HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"."cap"

    and errors with:

        ProgrammingError: column "app_plan.cap" must appear in the GROUP BY clause or be used in an aggregate function
        LINE 1: ...."plan_id" HAVING SUM("app_call"."cost") >= 0.5 * "app_plan"....

    Is there any workaround apart from running raw SQL?
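    One workaround that sidesteps the broken GROUP BY entirely is to aggregate in the database but compare in Python; a sketch, assuming the models above are imported (fine for moderate table sizes, since it materializes the phones):

        from django.db.models import Sum

        # annotate each phone with its summed call cost, pull the plan in the
        # same query, then apply the cap comparison per object in Python
        phones = (Phone.objects
                  .annotate(total_cost=Sum('calls__cost'))
                  .select_related('plan'))
        over_half_cap = [p for p in phones
                         if (p.total_cost or 0) >= 0.5 * p.plan.cap]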

    Read the article

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index lets me quickly find the strings that match my query. Now I have to search a big table for the strings that do not match. Of course, the normal index does not help, and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
        QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
        QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance, if almost all the tuples match). But here, "not equal" searches are really slower. Is there any way to make these "is not equal to" searches faster? Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
        QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
        QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
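    When the excluded value is a known constant, one option is a partial index that stores exactly the complement; a sketch for the second example (the index name is illustrative, and tld() must be indexable, which it already is per tld_idx above):

        -- Only rows where tld(email) <> 'fr' enter the index, so scanning it
        -- yields the "not equal" result set directly instead of a seq scan.
        CREATE INDEX emails_not_fr_idx
            ON EmailsPersons (tld(email))
            WHERE tld(email) <> 'fr';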

    Read the article
