Search Results

Search found 35094 results on 1404 pages for 'post build'.


  • Jenkins shows same information for all projects

    - by SuperCabbage
    I've been using Jenkins for the last month or so, and what started out as a small issue has gotten worse and worse. I have 10 projects in Jenkins, all polling from different Git repos and building to different environments, but they all show the same details on the dashboard. I can still build the projects, but I have to manually enter the URL to see any console output etc. I'm running Jenkins 1.536 under Ubuntu 12.04, and there's not much in the logs other than the following: Oct 22, 2013 2:21:19 PM WARNING jenkins.model.lazy.AbstractLazyLoadRunMap search JENKINS-15652 Assertion error #1: failing to load /data/builds #20 EXACT: lo=23,hi=9,size=23,size2=23 – Any ideas?

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (it's a shooter). The problem is that the update process for the animated models takes too long. That's what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time that it takes to build the animation isn't that bad (0.2ms for 30 bones - 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it's necessary to convert it. Code for the matrix conversion: (~0.005ms)

        btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat ) {
            btMatrix3x3 bulletRotation;
            btVector3 bulletPosition;
            XMFLOAT4X4 matData = mat.GetStorage();
            // copy rotation matrix
            for ( int row=0; row<3; ++row )
                for ( int column=0; column<3; ++column )
                    bulletRotation[row][column] = matData.m[column][row];
            for ( int column=0; column<3; ++column )
                bulletPosition[column] = matData.m[3][column];
            return btTransform( bulletRotation, bulletPosition );
        }

    The function for updating the transform (physics):

        void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove(Mat mat, ActorId aid) {
            if ( btRigidBody * const body = FindActorBody( aid ) ) {
                btTransform tmp = Mat_to_btTransform( mat );
                body->setWorldTransform( tmp );
            }
        }

    The real problem is the function FindActorBody(id):

        ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id );
        if ( found != m_actorBodies.end() )
            return found->second;

    All physics actors are stored in m_actorBodies, and that's why the updating process takes too long. But I have no idea how I could avoid this. Friendly greetings, Mathias
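    A direction the post above doesn't cover, and purely an illustration of one idea: since the per-bone cost is dominated by the m_actorBodies.find() lookup, the lookup can be done once, when the actor/bone is registered, and the resulting body reference kept next to the bone so the per-tick update never touches the map. A minimal sketch of that caching pattern (in Python only for readability; all names are made up and the real code would stay in C++/Bullet):

        class BoneBinding:
            """Pairs a bone with the rigid body it drives, resolved once up front."""
            def __init__(self, bone, body):
                self.bone = bone
                self.body = body  # looked up a single time, not once per tick

        def build_bindings(bones, actor_bodies):
            # actor_bodies plays the role of m_actorBodies: {actor_id: rigid_body}
            return [BoneBinding(bone, actor_bodies[bone.actor_id]) for bone in bones]

        def kinematic_update(bindings):
            for binding in bindings:
                # No per-bone map lookup here; just convert and push the new transform.
                binding.body.set_world_transform(binding.bone.world_matrix)

    In the original C++ this would amount to storing the btRigidBody pointer with the bone (or passing it along in the bone's event) instead of calling FindActorBody(aid) on every VKinematicMove.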

    Read the article

  • Powershell variables to string

    - by Mike Koerner
    I'm new to PowerShell. I'm trying to write an error handler to wrap around my script. Part of the error handler is dumping out some variable settings. I spent a while trying to do this and couldn't google a complete solution, so I thought I'd post something. I want to display the $myinvocation variable. In PowerShell you can do this:

        PS C:\> $myInvocation

    For my purpose I want to create a StringBuilder object and append the $myinvocation info. I tried this:

        $sbOut = new-object System.Text.Stringbuilder
        $sbOut.appendLine($myinvocation)
        $sbOut.ToString()

    This produces the StringBuilder's own Capacity/MaxCapacity/Length columns followed by "System.Management.Automation.InvocationInfo". That is not what I wanted, so I tried:

        $sbOut.appendLine(($myinvocation|format-list *))

    This produced the same Capacity/MaxCapacity/Length columns followed by a list of Microsoft.PowerShell.Commands.Internal.Format.* type names (FormatStartData, GroupStartData, FormatEntryData, GroupEndData, FormatEndData). Finally I figured out how to produce what I wanted:

        $sbOut = new-object System.Text.Stringbuilder
        [void]$sbOut.appendLine(($myinvocation|out-string))
        $sbOut.ToString()

        MyCommand        : $sbOut = new-object System.Text.Stringbuilder
                           [void]$sbOut.appendLine(($myinvocation|out-string))
                           $sbOut.ToString()
        BoundParameters  : {}
        UnboundArguments : {}
        ScriptLineNumber : 0
        OffsetInLine     : 0
        HistoryId        : 13
        ScriptName       :
        Line             :
        PositionMessage  :
        InvocationName   :
        PipelineLength   : 2
        PipelinePosition : 1
        ExpectingInput   : False
        CommandOrigin    : Runspace

    Note: the [void] in front of the StringBuilder call keeps the Capacity/MaxCapacity/Length of the StringBuilder object out of the output, and the pipe to Out-String makes the output a string. It's not pretty but it works.

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that Governments have carried out large scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big data” challenges to be faced and potential lessons to be learned from these high profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective violation of regulations/laws/corporate policies and procedures, emerging risks, weakening controls, etc.? Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professionals’ day-to-day challenges. So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data: Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data. Machine-Generated/Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data. Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook. The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data: Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more. Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed. Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change.
Without question, all GRC professionals work in a dynamic environment and as new services, new products, or new business lines are added, or new marketing campaigns are executed, for example, new data types are needed to capture the resultant information. Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the number of customer complaints. Now on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross referenced against what is being said about the organization or a similar peer organization on social media. The organization can then take positive actions, communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value. Big Challenges - Big Opportunities. As pointed out by recent Forrester research, high performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever increasing volume of regulatory demands and fines for getting it wrong, limited resource availability and out of date or inadequate GRC systems all contributing to a higher cost of compliance and/or higher risk profile than desired – a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly allows the organization access to and management over a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal as well as a technical or operational standpoint.
However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected. The Right GRC & Big Data Partnership Becomes Key. The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that the GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it. A Blueprint and Roadmap Service for Big Data. Big data adoption is first and foremost a business decision. As such it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key Activities: while your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

    • Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
    • Conduct a big data readiness and Information Architecture maturity assessment
    • Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
    • Provide initial guidance on big data candidate selection for migrations or implementation
    • Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
    • Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
    • Conduct an executive workshop with recommendations and next steps

    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • chunked response in nginx not working

    - by Dean Hiller
    I ran into this post which shows my problem EXACTLY: http://nginx.org/en/docs/faq/chunked_encoding_from_backend.html BUT browsers are using HTTP 1.1 these days, so I really don't understand. Our backend is the Play framework and I don't mind fixing it, but I don't really understand what is not working, ESPECIALLY since Firefox, Safari and Chrome ALL download the response just fine with no problems. ONLY when we stick NGINX in the middle do things break, and we end up with extra data in our JSON responses. Any idea how to fix this? The doc above just seems wrong, since we are now on later versions of HTTP, PLUS the browsers seem to work just fine. Thanks, Dean

    Read the article

  • Can thousands of backlinks from the same site harm PageRank?

    - by Dejan Pelzel
    I just noticed that one particular site has almost 7000 backlinks linking back to our website. The site is something like a news aggregator, and for each post they created around 20 (sometimes many more) backlinks back to our page, and they basically linked over 400 pages. I am beginning to get concerned that this number of links might harm our page. They seem to have more backlinks to our page than all the other pages combined, and more backlinks than our website has pages. We have seen a massive negative effect going on for quite a while, and the PageRank seems to have dropped to None (not even 0). But I am not sure when and why exactly that happened, seeing that PageRank updates take quite a while to appear. The site linking to us is otherwise pretty reputable and doesn't seem to be having any problems with their rank (PR 6 actually). I was thinking of using the Google disavow tool for this site, but I don't want to make things even worse. Do you think these are harmful? If so, how do I fix this? Thanks :)

    Read the article

  • Java EE 7 JSR Submitted

    - by Tori Wieldt
    Java EE 7 has been filed as JSR 342 in the JCP program. This JSR (Java Specification Request) will develop Java EE 7, the next version of the Java Platform, Enterprise Edition. It is an "umbrella JSR" because the specification includes a collection of several other JSRs. The proposal suggests the addition of two new JSRs: Concurrency Utilities for Java EE (JSR-236) and JCache (JSR-107), as well as updates to JPA, JAX-RS, JSF, Servlets, EJB, JSP, EL, JMS, JAX-WS, CDI, Bean Validation, JSR-330, JSR-250, and Java Connector Architecture. There are also two new APIs under discussion: a Java Web Sockets API and a Java JSON API. These are the new JSRs that are currently up for ballot:

    • JSR 342: Java Platform, Enterprise Edition 7 Specification
    • JSR 340: Java Servlet 3.1 Specification
    • JSR 341: Expression Language 3.0
    • JSR 343: Java Message Service 2.0
    • JSR 344: JavaServer Faces 2.2

    All 5 JSRs are now up for Executive Committee voting, with ballots closing on 14 March, and are slated for inclusion in Java EE 7. All of these JSRs are also open for Expert Group nominations. Any JCP member can nominate themselves to serve on the Expert Groups for these JSRs. Details on how to become a JCP member are on jcp.org. The JCP gives you a chance to have your own work become an official component of the Java platform and to offer suggestions for improving and growing the technology. Either way, everyone in the Java community benefits from your participation. There's a nice discussion about Java EE 7 in this podcast with Java EE spec lead Robert Chinnici and more information in this blog post on the Aquarium. It's exciting to see so much activity currently underway.

    Read the article

  • installing ubuntu on SSD

    - by kunal
    Going to install Ubuntu 10.10 on a new Intel X25-M 80GB SSD. It will be a fresh install. I have been googling for the past few days and getting overwhelming articles/blogs/Q&As. One particularly very useful one being: Optimize for SSD (I could not post other links as I don't have enough credits). But with so many suggestions and differences of opinion (on different links), this simple OS install process seems to be a daunting task to me, and I really want to stick with Ubuntu (although I have used it for only a very short period of time). Can someone help me by answering a few questions (yes, they are repeated, as I couldn't comprehend the answers elsewhere)?

    Which file system (ext2/3/4 or something else)? (Consider SSD life.) Can it be changed after installation?
    Should I partition the disk (as we do with a traditional HDD)? For now, no plan of dual booting. Only Ubuntu will live on the scarce space of the 80GB SSD.
    I have 2 GB RAM; should I still allocate swap space? (If I don't allocate swap space, can I still hibernate the machine?) Will swap space impact SSD life? Should I consider putting in an additional 1GB of RAM to avoid swap space?
    Linux experience - absolute novice. Intended usage - heavy browsing, programming, regular video/music and some other non-CPU/RAM-intensive programs. Will back up big files to an external hard drive. Laptop config - 3-yr-old Vaio, Core 2 Duo, 2GB RAM.

    Please pardon the repetition, and I really appreciate anyone helping me get started with Ubuntu.

    Read the article

  • SAP Applications Certified for Oracle SPARC SuperCluster

    - by Javier Puerta
    SAP applications are now certified for use with the Oracle SPARC SuperCluster T4-4, a general-purpose engineered system designed for maximum simplicity, efficiency, reliability, and performance. "The Oracle SPARC SuperCluster is an ideal platform for consolidating SAP applications and infrastructure," says Ganesh Ramamurthy, vice president of engineering, Oracle. "Because the SPARC SuperCluster is a pre-integrated engineered system, it enables data center managers to dramatically reduce their time to production for SAP applications to a fraction of what a build-it-yourself approach requires and radically cuts operating and maintenance costs." SAP infrastructure and applications based on the SAP NetWeaver technology platform 6.4 and above and certified with Oracle Database 11g Release 2, such as the SAP ERP application and SAP NetWeaver Business Warehouse, can now be deployed using the SPARC SuperCluster T4-4. The SPARC SuperCluster T4-4 provides an optimized platform for SAP environments that can reduce configuration times by up to 75 percent, reduce operating costs by up to 50 percent, improve query performance by up to 10x, and improve daily data loading by up to 4x. The Oracle SPARC SuperCluster T4-4 is the world's fastest general-purpose engineered system, delivering high performance, availability, scalability, and security to support and consolidate multi-tier enterprise applications with Web, database, and application components. The SPARC SuperCluster T4-4 combines Oracle's SPARC T4-4 servers running Oracle Solaris 11 with the database optimization of Oracle Exadata, the accelerated processing of Oracle Exalogic Elastic Cloud software, and the high throughput and availability of Oracle's Sun ZFS Storage Appliance, all on a high-speed InfiniBand backplane. Part of Oracle's engineered systems family, the SPARC SuperCluster T4-4 demonstrates Oracle's unique ability to innovate and optimize at every layer of technology to simplify data center operations, drive down costs, and accelerate business innovation. For more details, refer to: our press release; Datasheet: Oracle's SPARC SuperCluster T4-4 (PDF); Datasheet: Oracle's SPARC SuperCluster Now Supported by SAP (PDF); Video Podcast: Oracle's SPARC SuperCluster (MP4)

    Read the article

  • SOA Community Newsletter November 2012

    - by JuergenKress
    Dear SOA partner community member, Too many different products from Oracle and no idea how they fit together? Get a copy of the Oracle catalog, an excellent overview of the Oracle middleware portfolio. BPM is a key solution in this portfolio. To position BPM to your customers you can find many use case ideas in the paper BPM 11g Patterns and industry-specific value propositions for Financial Services & Insurance & Retail. Many more Process Accelerators (11.1.1.6.2) have become available. It is an excellent demo and starting point for BPM projects. Our SOA Suite team published the most important OOW presentations at the OTN website. The Oracle SOA proactive support team is running a series of blog posts about SOA and JMS Introductory. To become an expert in SOA, Bob highlighted the latest list of SOA books. For OSB projects we recommend the EAIESB OSB poster. Thanks to all the experts who contributed and shared their SOA & BPM knowledge this month again. Please feel free to send us the link to your blog post via twitter @soacommunity: Undeploy multiple SOA composites with WLST or ANT by Danilo Schmiedel; Fault Handling Slides and Q&A by Vennester; Installing Oracle Event Processing 11g by Antoney Reynolds; Expanding the Oracle Enterprise Repository with functional documentation by Marc Kuijpers; Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile by Michelle Kimihira; A brief note for customers running SOA Suite on AIX platforms by Christian; ACM - Adaptive Case Management by Peter Paul; BPM 11g - Dynamic Task Assignment with Multi-level Organization Units by Mark Foster; Oracle Real User Experience Insight: Oracle's Approach to User Experience. Hope to see you at the Middleware Day at UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/soanewsNovember2012 (OPN account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community newsletter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • How to safely backup MySQL using VSS-based backup solutions

    - by Rhyven
    One of my clients is running MySQL on a Windows Server 2008 system. Their regular backups are performed using StorageCraft's ShadowCopy, which uses the VSS service to perform backups of open files. Some investigation indicates that MySQL is not entirely VSS-aware, and that the tables need to be locked prior to the shadow operation, then unlocked afterwards. There is a post at http://forum.storagecraft.com/Community/forums/p/548/2702.aspx which indicates the steps that need to be performed, however the user had some difficulty in performing them and no follow up solution was ever posted. Specifically, they succeeded in writing a batch file to lock the database, however once the batch file returns from MySQL it drops the connection and thus relinquishes the lock. I'm looking for a method that I can send the MySQL command FLUSH TABLES WITH READ LOCK, then perform the backup, then send UNLOCK TABLES when the backup is complete. Alternatively, I can exclude the MySQL data storage folder from backup, and schedule a mysqldump backup into a folder that will then be backed up by VSS. Can I have some recommendations please?
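    One hedged sketch of the first approach (this is my own illustration, not from the post): FLUSH TABLES WITH READ LOCK only lasts as long as the client connection that issued it, which is exactly why the batch file loses the lock when it exits, so the script has to hold one connection open across the whole snapshot. Assuming Python with the mysql-connector-python package is available on the server, and using a placeholder command where the real ShadowCopy/backup job would be triggered:

        # Sketch only: host, credentials and the backup command are placeholders.
        import subprocess
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="backup_user", password="***")
        cur = conn.cursor()
        try:
            # The read lock is tied to this connection; keep it open until the snapshot finishes.
            cur.execute("FLUSH TABLES WITH READ LOCK")
            subprocess.run(["C:\\scripts\\trigger_backup.cmd"], check=True)  # stand-in for the VSS job
        finally:
            cur.execute("UNLOCK TABLES")
            conn.close()

    The mysqldump-into-a-folder alternative mentioned above sidesteps the locking question entirely and is often the simpler option, at the cost of keeping a second copy of the data on disk.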

    Read the article

  • SQL Server 2008 Replication Promotion

    - by Stefan Mai
    I have a 4 node cluster, 1 subscriber and 3 publishers, all running SQL Server 2008 R2 Enterprise. The intention is that if the subscriber goes down, we can use one of the publishers to quickly build up its replacement. Our testing reveals a problem though: the subscriber databases all have Not For Replication set to Yes on the identity columns so that they can maintain the identity set in the subscriber. This causes a problem when they become subscribers because now we don't have identity insert functionality: we get a primary key error. Any way to "promote" a subscriber to publisher?

    Read the article

  • Multiline Replacement With Visual Studio

    - by Alois Kraus
    I had to remove some file headers in a bigger project which were all of the form:

        #region File Header
        /*[ Compilation unit ----------------------------------------------------------
              Name            : Class1.cs
              Language        : C#
              Creation Date   :
              Description     :
        -----------------------------------------------------------------------------*/
        /*] END */
        #endregion

    I know it would be a cool thing to write a simple C# program: use a recursive file search, read all lines, skip the first n lines and write the files back to disc. But I wanted to test things first before I ruin my source files with one little typo. That's where the Visual Studio Search and Replace in Files dialog comes into the game. I can test my regular expression to do a multiline match with the Find button before actually breaking anything. And if something goes wrong I have the Undo button. There is a nice blog post from Paulo Morgado online who deals with multiline regular expressions. The Visual Studio regular expressions are non-standard, so you have to adapt your usual regex know-how to the other patterns. The pattern I finally came up with is

        \#region File Header:b*(.*\n)@\#endregion

    The regular expression can be read as:

        \#region File Header   Match "#region File Header"
        \#                     Escapes the # character since it is a quantifier
        :b*                    After this, none or more spaces or tabs can follow (:b stands for space or tab)
        (.*\n)@                Match anything across lines in a non-greedy way (the @ character makes it non-greedy) to prevent matching too much, up to the #endregion somewhere in our source file
        \#endregion            Match everything until "#endregion" is found

    I always knew that Visual Studio could do it, but I never bothered to learn the non-standard regex syntax. This is powerful, and it has been inside Visual Studio since 2005!
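    For comparison (purely as an aside, not part of the original post), roughly the same header-stripping match in standard regex syntax, sketched in Python: VS's :b becomes [ \t] and the non-greedy (.*\n)@ becomes a non-greedy .*? under DOTALL. This is an approximation of the pattern above, not a drop-in replacement.

        # Rough standard-regex equivalent of the Visual Studio pattern above.
        import re

        header = re.compile(r"#region File Header[ \t]*.*?#endregion\r?\n?", re.DOTALL)

        def strip_file_header(source: str) -> str:
            # Remove the first file-header block, if present.
            return header.sub("", source, count=1)

    Handy if you later do decide to write that small batch tool instead of relying on the IDE dialog.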

    Read the article

  • SQL SERVER – What is the Maximum Relational Database Size Supported by Single Instance?

    - by Pinal Dave
    I often get asked the following question: "How much data can SQL Server handle?" Every single time I get this question, I ask the following question back: "How much data can your storage system handle?" The reason I ask this question back is that, in reality, for enterprise systems the limitation of storage is no longer an issue. As a matter of fact, most databases nowadays are limited by the size of the storage system. SQL Server is an enterprise system and a very mature product. Even so, if you still want to know what the actual limit is, here is the answer. SQL Server 2008 R2, 2012 and 2014 have a maximum capacity of 524 PB (petabytes) in the Enterprise, BI and Standard editions. SQL Server Express has a limitation of 10 GB due to its nature. I guess, now when you look at my question, it will make sense that it all depends on the size of your storage system. I personally believe that at this point in time 524 PB is quite a huge amount of data, but we never know - after 10 years, when we read this blog post, we all may think "what was I thinking, actually?" Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to explain a layperson why a developer should not be interrupted while neck-deep in coding?

    - by András Szepesházi
    If you just consider the second part of my question, "Why a developer should not be interrupted while neck-deep in coding", that has been discussed a number of times by smart people. Heck, even the co-founder of SO, Joel Spolsky, wrote a blog post about "getting in the zone" and "being knocked out of the zone" and why it takes an average of 15 minutes to achieve productivity when participating in complex, software development related tasks. So I think the why has been established. What I'm interested in is how to explain all that to somebody who doesn't know beans about Beans (khmm I mean software development). How to tell the wife, or the funny guy from accounting at the workplace, or the long time friend who pings you on Skype every 30 minutes with a "Wazzzzzzup?!", that all the interruptions have a much deeper impact on your work than the obvious 30 seconds they took from your time. Obviously you can't explain it by sentences like "I have to juggle a lot of variable names in my short term memory" unless you want to be the target of blank stares or friendly abuse. I'd like to be able to explain all that to non-developers in a way that will make them clearly understand - without being offensive, elitist or too technical.

    Read the article

  • How can I update fontconfig to a newer version in Red Hat 5.3?

    - by user16654
    I want to update fontconfig to a newer version, but it seems that the OS is still finding the old fontconfig, and I need the newer version to build Qt. How do I make Red Hat 5.3 see the newer version? I do not know if this helps, but when I did a search for fontconfig I found some files in a folder called cache. When I do yum update it tells me everything is up to date, but that version is too old and is missing FcFreeTypeQueryFace. Just send me a comment if this is the wrong site and I'll change it.

    Read the article

  • How do I get my server to recognise a change in users?

    - by Gareth
    I'm new in post, running a small charity. As a result I've inherited a system from the previous manager that was poorly run. In my enthusiasm to change login details etc., I appear to have killed my Microsoft Outlook account. When trying to access Outlook it prompts me for my user name, domain name and password - which I'm obviously not filling in correctly, because it's not accepting my answers. Is there a way to determine that information from the server side? I have access to the server and the staff computers.

    Read the article

  • Changes user's email on crowd doesn't apply in confluence

    - by donamir
    Hi, this is my configuration: Atlassian Crowd Version: 2.0.2 (Build: #409 - 06-10-2009), Atlassian Confluence 3.0.2. I can't change users' emails in Confluence because I use Crowd for external user management, so I have to change users' emails in Crowd, but the changes don't apply in Confluence. Even though users have access to the Crowd user console and can log in to Crowd, view their profile and edit their email and password, the changed emails don't apply in Confluence. What should I do? Is it a bug? I should also mention that the emails contain a dot character (.). I posted my question to the Atlassian forum 3 days ago but there's no answer yet!

    Read the article

  • Monitoring bespoke software with Zenoss

    - by Andy S
    We've got a lot of back-end applications that we need to monitor the performance of (metrics such as orders waiting to be processed, time since last run, etc). Currently, this is done by an in-house watchdog application that fires out emails whenever a threshold is exceeded, but there's no way to acknowledge an issue and squelch these alerts. Rather than build our own complete alerting system, we'd like to tie in to the Zenoss installation we use to monitor our servers. I've found a few articles on creating events programmatically, but I'd rather Zenoss itself monitors the values that the current watchdog app is looking at (so we get the benefits of graphing and history as well). Is it possible, then, to programmatically provide a data feed (rather than an event) to Zenoss? Or is there another way to go about this?
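    One possible direction (an assumption on my part, not something stated in the post): Zenoss can poll a script through a COMMAND datasource, and with the Nagios parser selected it will graph any performance data printed after the | separator, which would give the watchdog's metrics thresholds, history and acknowledgeable events without a separate alerting system; this is worth verifying against your Zenoss version. A rough sketch of such a check script, with the actual metric query stubbed out:

        #!/usr/bin/env python
        # Hypothetical Nagios-style check for a Zenoss COMMAND datasource.
        # read_orders_waiting() is a placeholder for a real query against the bespoke app.
        import sys

        WARN, CRIT = 50, 200  # example thresholds

        def read_orders_waiting():
            return 42  # placeholder: query the application's backlog here

        def main():
            value = read_orders_waiting()
            # Nagios plugin output: "status text | label=value;warn;crit"
            print("orders waiting: %d | orders_waiting=%d;%d;%d" % (value, value, WARN, CRIT))
            if value >= CRIT:
                sys.exit(2)   # CRITICAL
            if value >= WARN:
                sys.exit(1)   # WARNING
            sys.exit(0)       # OK

        if __name__ == "__main__":
            main()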

    Read the article

  • Whats the greatest most impressive programing feat you ever witnessed? [closed]

    - by David Reis
    Everyone knows the old adage that the best programmers can be orders of magnitude better than the average. I've personally seen good code and programmers, but never something so absurd. So the question is, what is the most impressive feat of programming you ever witnessed or heard of? You can define impressive by: The scope of the task at hand, e.g. John single-handedly developed the framework for his company, a work comparable in scope to what the other 200 employees were doing combined. Speed, e.g. Stu programmed an entire real-time multi-tasking OS over a weekend, including its own C compiler and shell command-line tools. Complexity, e.g. Jane rearchitected our entire 10 million LOC app to work in a cluster of servers. And she did it in an afternoon. Quality, e.g. Charles's code had a rate of defects per LOC 100 times lower than the company average. Furthermore, his code was clean and understandable by all. Obviously, the more of these characteristics combined, and the more extreme each of them, the more impressive the feat. So, let me have it. What's the most absurd feat you can recount? Please provide as much detail as possible and try to avoid urban legends or exaggerations. Post only what you can actually vouch for. Bonus questions: Was the herculean task a one-off, or did the individual regularly amaze people? How do you explain such impressive performance? How was the programmer recognized for such awesome work?

    Read the article

  • Why are all Linux commands broken after installing Perl?

    - by user115079
    I installed Perl using the following command:

        curl -L http://xrl.us/installperlnix | bash

    After that I ran the following command to create a soft link:

        ln -sf /usr/local/bin/perl /usr/bin/perl

    Now I'm trying to run commands like dir, mkdir, ll, rm, vi but nothing seems to be working for me. Also, when I try to log in to my shell I get the following message at startup:

        Last login: Wed Apr 4 21:50:12 2012 from x.y.z.ip
        -bash: perl: command not found

    Please help. Here are the system details:

        cat /proc/version
        Linux version 2.6.18-274.18.1.el5.028stab098.1 (root@rhel5-build-x64) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Sat Feb 11 15:30:41 MSK 2012
        cat /etc/issue
        CentOS 5.7 32 bit Kernel \r on an \m

    I don't know if Perl was already installed or not, and now I can't check.

    Read the article

  • GetContactList stops reporting collisions on welded bodies

    - by Henrique Jung
    I have a strange problem with my game, which uses Box2D as its physics engine, and I'm out of ideas on what I can do to solve it. My game is a class assignment where I need to build a simple game where the main character moves in a 2D environment while square blocks come from below him. Each time a collision occurs, that block is attached to the character using a weld joint; when three blocks of the same color are together, they annihilate themselves (an effect similar to Bejeweled). I'm using a recursive function to iterate through all the attached blocks of a given block to see if there are enough blocks for them to be deleted. I'm using the GetContactList function to iterate through the list of contacts to see which blocks are adjacent to each other. The results are quite disappointing: the blocks only get annihilated in a few cases. After a lot of debugging, I found the issue, but I still don't know how to solve it. My issue is: after some time, GetContactList STOPS returning contacts (returns NULL) for blocks that were already attached for some time. I spent some time reading the Box2D manual as well as some tutorials and still didn't find any clue of what is happening. Below there's a simplified version of the code that I wrote.

        for(int a = 0; a < blocksList.size(); a++) {
            blocksList[a].BuildConnections();
        }

    And in BuildConnections:

        b2ContactEdge* edge = body->GetContactList();
        while(edge != NULL) {
            if (long_check_to_see_if_there's_a_block_nearby) {
                // add itself to the list to be annihilated
                globalList.push_back(this);
                // if there is, call BuildConnections again on the adjacent block
                adjacentBody->GetUserData()->BuildConnections();
            }
            edge = edge->next;
        }

    I know that there's another issue related to circular inclusions, but I'm fairly sure that problem isn't causing the problem with the collisions. You can download my entire code from this page if you'd like: http://code.google.com/p/fellz/source/list
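    One way to sidestep the unreliable contact list entirely (a different technique from the post above, with made-up names, not a Box2D fix): record adjacency yourself at the moment each weld joint is created, and run the match-3 search over that graph instead of over GetContactList. A rough Python sketch of the bookkeeping:

        from collections import defaultdict, deque

        class BlockGraph:
            """Adjacency recorded when weld joints are created, independent of Box2D's contact list."""
            def __init__(self):
                self.neighbors = defaultdict(set)

            def on_weld_created(self, block_a, block_b):
                self.neighbors[block_a].add(block_b)
                self.neighbors[block_b].add(block_a)

            def on_block_destroyed(self, block):
                for other in self.neighbors.pop(block, set()):
                    self.neighbors[other].discard(block)

            def same_color_group(self, start):
                # Breadth-first search over welded neighbours of the same colour.
                group, queue = {start}, deque([start])
                while queue:
                    current = queue.popleft()
                    for nxt in self.neighbors[current]:
                        if nxt.color == start.color and nxt not in group:
                            group.add(nxt)
                            queue.append(nxt)
                return group

    After every new weld, call same_color_group on the freshly attached block; if the group has three or more members, annihilate them and call on_block_destroyed for each. Because the graph is maintained by your own weld/destroy events, it cannot silently stop reporting neighbours the way the contact list does here.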

    Read the article

  • UEFI Boot Failure - Hang after Printing USB Information

    - by James
    I'm experiencing a really weird boot problem. With both 12.10 and 12.04 LTS, the vast majority of kernels (and initrds) that I've tried boot, but hang immediately after printing out information about USB devices. This isn't exactly a full "hang" so to speak, as if I plug in a flash drive, I see information and a /dev/sd* entry printed to the screen. Post/pre-init scripts are not run, there is no handoff, nor a busybox or VT prompt. Virtual terminals can't be changed (with Ctrl-Alt-Fx). From what I can see, init may not have been executed yet. With certain kernel and OS combinations, however (specifically 3.2.0-29), I get a full boot and am able to use the OS as if there is no problem. After 3.2.0-29, I've been hard pressed to find a kernel that works. Any idea what's happening or how to fix this? Or even a road to take? I've exhausted the first five pages of Google for every search term I can think of. This is a Lenovo Z580 (i5-3210M) with Phoenix SecureCore Tiano firmware, if that helps any.

    Read the article

  • How can I rank teams based off of head to head wins/losses

    - by TMP
    I'm trying to write an algorithm (specifically in Ruby) that will rank teams based on their record against each other. If team A and team B have won the same number of games against each other, then it goes down to point differentials. Here's an example:

        A beats B two times
        B beats C one time
        A beats D three times
        C beats D two times
        D beats C one time
        B beats A one time

    Which sort of reduces to

        A[B] = 2
        B[C] = 1
        A[D] = 3
        C[D] = 2
        D[C] = 1
        B[A] = 1

    Which sort of reduces to

        A[B] = 1
        B[C] = 1
        A[D] = 3
        C[D] = 1
        D[C] = -1
        B[A] = -1

    Which is about as far as I've got. I think the results of this specific algorithm would be: A, B, C, D. But I'm stuck on how to transition from my nested hash-like structure to the results. My pseudo-code is as follows (I can post my Ruby code too if someone wants):

        For each game(g):
            hash[g.winner][g.loser] += 1

    That leaves hash as the first reduction above.

        hash2 = clone of hash
        For each key(winner), value(losers hash) in hash:
            For each key(loser), value(losses against winner):
                hash2[loser][winner] -= losses

    Which leaves hash2 as the second reduction. Feel free to ask me questions or edit this to be more clear; I'm not sure how to put it in a very eloquent way. Thanks!
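    One way to turn the pairwise numbers above into the expected A, B, C, D ordering, sketched in Python for illustration (the original asks for Ruby, and the tie-break scheme here, head-to-head series led, then the direct head-to-head result, then total differential, is just one reasonable choice; it also uses win counts in place of point differentials, since the example only records wins):

        # Sketch: rank teams by head-to-head series led, breaking ties on the direct
        # head-to-head result, then on total differential.
        from collections import defaultdict
        from functools import cmp_to_key

        # (winner, loser) per game, matching the example above
        games = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "D"), ("A", "D"), ("A", "D"),
                 ("C", "D"), ("C", "D"), ("D", "C"), ("B", "A")]

        net = defaultdict(int)   # net[(x, y)] = x's wins over y minus y's wins over x
        teams = set()
        for winner, loser in games:
            net[(winner, loser)] += 1
            net[(loser, winner)] -= 1
            teams.update([winner, loser])

        def series_led(t):
            return sum(1 for u in teams if u != t and net[(t, u)] > 0)

        def total_diff(t):
            return sum(net[(t, u)] for u in teams if u != t)

        def compare(a, b):
            if series_led(a) != series_led(b):          # 1) more head-to-head series led ranks higher
                return series_led(b) - series_led(a)
            if net[(a, b)] != 0:                        # 2) direct head-to-head between the tied pair
                return -net[(a, b)]
            return total_diff(b) - total_diff(a)        # 3) overall differential as a last resort

        ranking = sorted(teams, key=cmp_to_key(compare))
        print(ranking)   # ['A', 'B', 'C', 'D'] for the example data

    The same structure ports fairly directly to Ruby: build the nested hash as in the pseudo-code, then sort the teams with a custom comparator that applies the same three-step comparison.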

    Read the article
