Search Results

Search found 22756 results on 911 pages for 'power query'.

Page 717/911

  • Getting Phing's dbdeploy task to automatically rollback on delta error

    - by Gordon
    I am using Phing's dbdeploy task to manage my database schema. This works fine as long as there are no errors in the queries of my delta files. However, if there is an error, dbdeploy will just run the delta files up to the query with the error and then abort. This causes me some frustration, because I then have to manually roll back the entry in the changelog table. If I don't, dbdeploy will assume the migration was successful on a subsequent try. So the question is: is there any way to get dbdeploy to use transactions, or can you suggest any other way to have Phing roll back automatically when an error occurs? Note: I'm not that proficient with Phing, so if this involves writing a custom task, any example code or a URL with further information is highly appreciated. Thanks
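
    For reference, the manual cleanup described above is usually a one-row delete against dbdeploy's changelog table. This is a minimal sketch, assuming the default dbdeploy changelog table and column names (change_number, delta_set); the delta number is a hypothetical value:

        -- minimal sketch: remove the record for the delta that failed so dbdeploy retries it
        DELETE FROM changelog
        WHERE change_number = 42      -- hypothetical: the delta that failed
          AND delta_set = 'Main';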

  • WPF absolute positioning in InkCanvas

    - by Nilu
    I'm trying to position a rectangle in an InkCanvas using the following method. Unfortunately, when I add the rectangle it gets displayed at (0,0), although when I query whether the left property is 0 I get a non-zero value. Does anyone know why this might be? Cheers, Nilu InkCanvas _parent = new InkCanvas(); private void AddDisplayRect(Color annoColour, Rect bounds) { Rectangle displayRect = new Rectangle(); Canvas.SetTop(displayRect, bounds.Y); Canvas.SetLeft(displayRect, bounds.X); // check to see if the property is set Trace.WriteLine(Canvas.GetLeft(displayRect)); displayRect.Width = bounds.Width; displayRect.Height = bounds.Height; displayRect.Stroke = new SolidColorBrush(annoColour); displayRect.StrokeThickness = 1; _parent.Children.Add(displayRect); }
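
    One likely cause, offered as a sketch rather than a confirmed diagnosis: InkCanvas positions its children through its own InkCanvas.Left/InkCanvas.Top attached properties rather than Canvas.Left/Canvas.Top, so setting the Canvas properties stores a value (hence the non-zero read-back) but has no effect on layout. A minimal change to the method above:

        // minimal sketch: use InkCanvas's own attached properties to position the child
        InkCanvas.SetTop(displayRect, bounds.Y);
        InkCanvas.SetLeft(displayRect, bounds.X);
        _parent.Children.Add(displayRect);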

  • Hierarchy based aggregation

    - by Ganapathy Subramaniam
    I have a hierarchy table in SQL Server 2005 which contains employees - managers - department - location - state. Sample data for the hierarchy table:

    ID  Name        ParentID  Type
    1   PA          NULL      0 (group)
    2   Pittsburgh  1         1 (subgroup)
    3   Accounts    2         1
    4   Alex        3         2 (employee)
    5   Robin       3         2
    6   HR          2         1
    7   Robert      6         2

    The second one is a fact table which contains employee salary details (ID and Salary). Sample data for the fact table:

    ID  Salary
    4   6000
    5   5000
    7   4000

    Is there any good way to display the hierarchy from the hierarchy table with the aggregated sum of salary based on employees? The expected result is:

    Name        Salary
    PA          15000   (Pittsburgh + others, if any)
    Pittsburgh  15000   (Accounts + HR)
    Accounts    11000   (Alex + Robin)
    Alex        6000    (direct values)
    Robin       5000
    HR          4000
    Robert      4000

    In my production environment, the hierarchy table may contain 23,000+ rows and the fact table may contain 300,000+ rows. So I thought of passing a group ID at any level to the query, to retrieve just its children and the corresponding aggregated values. Any better solution?
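
    A minimal sketch of one approach for SQL Server 2005: build ancestor-descendant pairs with a recursive CTE, join the descendants to the fact table, and group by ancestor. The table names Hierarchy and Fact are assumptions standing in for the tables above; adjust to your schema.

        -- minimal sketch: roll salaries up the hierarchy with a recursive CTE
        WITH Pairs AS
        (
            -- every node is a descendant of itself
            SELECT ID AS AncestorID, ID AS DescendantID
            FROM Hierarchy
            UNION ALL
            -- children of a descendant are also descendants of the ancestor
            SELECT p.AncestorID, h.ID
            FROM Pairs p
            JOIN Hierarchy h ON h.ParentID = p.DescendantID
        )
        SELECT h.Name, SUM(f.Salary) AS Salary
        FROM Pairs p
        JOIN Hierarchy h ON h.ID = p.AncestorID
        JOIN Fact f ON f.ID = p.DescendantID
        GROUP BY h.ID, h.Name
        ORDER BY h.ID;

    To restrict the output to one subtree, add WHERE p.AncestorID = @GroupID before the GROUP BY.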

  • Nhibernate Criteria Group By clause and select other data that is not grouped

    - by Peter R
    I have a parent-child relationship, let's say class and children. Each child belongs to a class and has a grade. I need to select the children (or the IDs of the children) with the lowest grade per class. session.CreateCriteria(typeof(Classs)) .CreateAlias("Children", "children") .SetProjection(Projections.ProjectionList() .Add(Projections.Min("children.Grade")) .Add(Projections.GroupProperty("Id")) ) .List<Object[]>(); This query returns the lowest grade per class, but I don't know which child got that grade. When I add the child's Id to the grouping, the grouping is wrong and every child gets returned. I was hoping I could just select the IDs of those children without grouping on them. If this is not possible, then maybe there is a way to solve this with subqueries?
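
    Since the last sentence mentions subqueries: one common approach is a correlated subquery that keeps only the children whose grade equals the minimum grade within their class. This is a minimal sketch, assuming the child entity is called Child with a Classs reference and a Grade property (adjust names to your mappings; requires NHibernate.Criterion):

        // minimal sketch: children whose Grade is the minimum within their class
        var minGradeInClass = DetachedCriteria.For<Child>("c2")
            .Add(Restrictions.EqProperty("c2.Classs", "c.Classs"))  // correlate on the parent class
            .SetProjection(Projections.Min("c2.Grade"));

        var lowestGraded = session.CreateCriteria<Child>("c")
            .Add(Subqueries.PropertyEq("Grade", minGradeInClass))
            .List<Child>();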

  • [ASP.NET] Odd HttpRequest behaviour

    - by barguast
    I have a web service which runs with an HttpHandler class. In this class, I inspect the request stream for form / query string parameters. In some circumstances, it seemed as though these parameters weren't getting through. After a bit of digging around, I came across some behaviour I don't quite understand. See below: // The request contains 'a=1&b=2&c=3' // TEST ONLY: Read the entire request string contents; using (StreamReader sr = new StreamReader(context.Request.InputStream)) { contents = sr.ReadToEnd(); } // Here 'contents' is usually correct - containing 'a=1&b=2&c=3'. Sometimes it is empty. string a = context.Request["a"]; // Here, a = null, regardless of whether the 'contents' variable above is correct Can anyone explain to me why this might be happening? I'm using a .NET WebClient and UploadDataAsync to perform the request on the client, if that makes any difference. If you need any more information, please let me know.
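
    Two things worth checking here, sketched below rather than confirmed: Request["a"] only finds body parameters when the request is sent as application/x-www-form-urlencoded, and WebClient.UploadData/UploadDataAsync does not set a Content-Type by default; also, reading InputStream to the end leaves the stream positioned at its end for anything that reads it afterwards. The URL below is a placeholder; the snippets assume System.Net and System.Text.

        // client side (minimal sketch): declare the body as form-encoded
        var client = new WebClient();
        client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
        byte[] body = Encoding.UTF8.GetBytes("a=1&b=2&c=3");
        client.UploadDataAsync(new Uri("http://example.com/handler.ashx"), body); // hypothetical URL

        // handler side (minimal sketch): rewind the stream after any manual read
        context.Request.InputStream.Position = 0;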

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: From the end of the aisle sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear out failures on your grocery cart, or worse, yourself. This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch” where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files. The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. With the T10000D, it has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: A tape mark received by the tape drive is treated as a buffered tape mark. A file sync received by the tape drive is treated as a no op command. Since buffered tape marks and no op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark. So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is because tape drive buffers are meant to be just temporary data repositories so in the event of a power loss, there could be data loss in certain environments for the files that resided in the buffer. 
    Fortunately, we do have best practices, depending on your environment, to prevent this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself.
    Customer Advisory Panel
    One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com.
    Photo taken on Idaho's Sacajawea Historic Byway by Rick Ramsey.
    - Brian Zents

  • How to fire the function event on the PageLoad event of XtraReport?

    - by ahmed
    I am working on an XtraReport that pulls some data from a table. I created a function and pass it a parameter at runtime for the report to generate. When I call the function from a button's OnClick event the result is perfect, but when I call the same function on the page load of the ReportViewer I do not get the result. About the function: I create a DataSet and an adapter with a SQL query at runtime, pass in a parameter value taken from session, and bind the data to the labels. But as I said, I want the result on the page load event of the report viewer.
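
    For reference, a minimal sketch of wiring the same routine into the viewer page's load; everything here is an assumption rather than a confirmed fix (BindReport and the session key are hypothetical stand-ins for the function and parameter described above):

        // minimal sketch: run the same binding routine on the first load of the viewer page
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack && Session["ReportParam"] != null)
            {
                BindReport(Session["ReportParam"].ToString());  // hypothetical helper from the question
            }
        }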

  • Trouble with MySQL: CONCAT_WS(' ', name_first, name_middle, name_last) like '%keyword%'

    - by AJB
    Hey folks, I'm setting up a keyword search across multiple fields (name_first, name_middle, name_last), but I'm not getting the results I'd like. Here's the query: "SELECT accounts_users.user_ID, users.name_first, users.name_middle, users.name_last, users.company FROM accounts_users, users WHERE accounts_users.account_ID = '$account_ID' AND accounts_users.user_ID = users.id AND CONCAT_WS(' ', users.name_first, users.name_middle, users.name_last) LIKE '$user_keyword%' ORDER BY users.name_first ASC" So, if I've got three names in the DB: Aaron J Ban Aaron J Can Bob L Lawblaw And if user_keyword == "bob lawblaw" I get no result. If user_keyword == "bob L" then it returns Bob L Lawblaw. Obviously I can't force people to include the person's middle name in their keyword search, but I'm stuck for the proper way to do this. All help is greatly appreciated.
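
    One way to let a multi-word keyword match across the concatenated name, sketched below: replace the spaces in the keyword with '%' so whatever sits between the words (the middle name here) no longer blocks the match. A minimal MySQL sketch with the keyword inlined; in the real query, escape the user input before building the pattern.

        -- minimal sketch: 'bob lawblaw' becomes the pattern '%bob%lawblaw%'
        SELECT u.id, u.name_first, u.name_middle, u.name_last
        FROM users u
        WHERE CONCAT_WS(' ', u.name_first, u.name_middle, u.name_last)
              LIKE CONCAT('%', REPLACE('bob lawblaw', ' ', '%'), '%');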

  • MySql timeouts - Should I set autoReconnect=true in Spring application?

    - by George
    After periods of inactivity on my website (Using Spring 2.5 and MySql), I get the following error: org.springframework.dao.RecoverableDataAccessException: The last packet sent successfully to the server was 52,847,830 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. According to this question, and the linked bug, I shouldn't just set autoReconnect=true. Does this mean I have to catch this exception on any queries I do and then retry the transaction? Should that logic be in the data access layer, or the model layer? Is there an easy way to handle this instead of wrapping every single query to catch this?
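
    The usual alternative to autoReconnect is to have the connection pool validate (or retire) idle connections before handing them out, so no retry logic is needed in the data access layer. A minimal sketch with Commons DBCP, assuming BasicDataSource is the pool in use (c3p0 and other pools have equivalent settings); the URL and credentials are placeholders:

        // minimal sketch: validate pooled connections so stale ones are replaced transparently
        import org.apache.commons.dbcp.BasicDataSource;

        public class DataSourceConfig {
            public static BasicDataSource mysqlDataSource() {
                BasicDataSource ds = new BasicDataSource();
                ds.setDriverClassName("com.mysql.jdbc.Driver");
                ds.setUrl("jdbc:mysql://localhost:3306/mydb"); // hypothetical URL
                ds.setUsername("appuser");                     // hypothetical credentials
                ds.setPassword("secret");
                ds.setValidationQuery("SELECT 1"); // cheap statement MySQL answers immediately
                ds.setTestOnBorrow(true);          // run the check each time a connection is borrowed
                return ds;
            }
        }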

  • Odd SQL Results

    - by Ryan Burnham
    So I have the following query: Select id, [First], [Last] , [Business] as contactbusiness, (Case When ([Business] != '' or [Business] is not null) Then [Business] Else 'No Phone Number' END) from contacts The results look like:

    id  First  Last   contactbusiness  (No column name)
    2   John   Smith
    3   Sarah  Jane   0411 111 222     0411 111 222
    6   John   Smith  0411 111 111     0411 111 111
    8                 NULL             No Phone Number
    11  Ryan   B      08 9999 9999     08 9999 9999
    14  David  F      NULL             No Phone Number

    I'd expect record 2 to also show No Phone Number. If I change the "[Business] is not null" to [Business] != null then I get the correct results:

    id  First  Last   contactbusiness  (No column name)
    2   John   Smith                   No Phone Number
    3   Sarah  Jane   0411 111 222     0411 111 222
    6   John   Smith  0411 111 111     0411 111 111
    8                 NULL             No Phone Number
    11  Ryan   B      08 9999 9999     08 9999 9999
    14  David  F      NULL             No Phone Number

    Normally you need to use is not null rather than != null. What's going on here?
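
    The behaviour comes down to SQL's three-valued logic. With OR, the empty string in record 2 satisfies [Business] is not null, so the THEN branch returns the empty Business value; for the NULL rows, [Business] != '' evaluates to UNKNOWN, the whole condition is not true, and the ELSE fires. Comparing with != null always yields UNKNOWN, which is why that version only appears to work. A minimal sketch of the condition as presumably intended (AND instead of OR):

        -- minimal sketch: treat both NULL and '' as "no phone number"
        SELECT id, [First], [Last], [Business] AS contactbusiness,
               CASE WHEN [Business] IS NOT NULL AND [Business] != ''
                    THEN [Business]
                    ELSE 'No Phone Number'
               END AS phone
        FROM contacts;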

  • SQL Server 2008 DBNETLIB error

    - by Joseph
    Hi all, our ASP.NET applications are getting the error below: " [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied " But I can connect with Enterprise Manager, Management Studio, and Query Analyzer without any issue. These applications had been running without any issue for a long time; for the last week we have been getting this error. If we restart the server it works, then the error comes back after 3 to 4 hours. We are running on Windows Server 2003. I have been searching and have not found a solution yet. If anybody knows anything about this error, please post the details to resolve it. Thank you in advance, Joseph

  • Running SQL script through psql gives syntax errors that don't occur in PgAdmin

    - by Peter
    Hi, I have the following script to create a table:

    -- Create State table.
    DROP TABLE IF EXISTS "State" CASCADE;
    CREATE TABLE "State"
    (
      StateID SERIAL PRIMARY KEY NOT NULL,
      StateName VARCHAR(50)
    );

    It runs fine in the query tool of PgAdmin. But when I try to run it from the command line using psql:

    psql -U postgres -d dbname -f 00101-CreateStateTable.sql

    I get a syntax error as shown below:

    2: ERROR: syntax error at or near ""
    LINE 1: ^
    psql:00101-CreateStateTable.sql:6: NOTICE: CREATE TABLE will create implicit sequence "State_stateid_seq" for serial column "State.stateid"
    psql:00101-CreateStateTable.sql:6: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "State_pkey" for table "State"
    CREATE TABLE

    Why do I get a syntax error using psql and not with PgAdmin? Kind regards, Peter
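
    One frequent cause of exactly this symptom (an invisible token rejected at the top of the file while the rest of the script runs) is a UTF-8 byte-order mark at the start of the file: PgAdmin's query tool tolerates it, psql passes it to the parser. This is a guess rather than a confirmed diagnosis; a minimal sketch for checking and stripping it with common GNU tools:

        # show the first three bytes; a UTF-8 BOM appears as ef bb bf
        head -c 3 00101-CreateStateTable.sql | od -An -tx1

        # strip a leading BOM in place (GNU sed)
        sed -i '1s/^\xEF\xBB\xBF//' 00101-CreateStateTable.sql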

  • Linked servers SQLNCLI problem. "No transaction is active"

    - by Felipe Fiali
    I'm trying to execute a stored procedure and simply insert its results into a temporary table, and I'm getting the following message: The operation could not be performed because OLE DB provider "SQLNCLI" for linked server "MyServerName" was unable to begin a distributed transaction. OLE DB provider "SQLNCLI" for linked server "MyServerName" returned message "No transaction is active.". My query looks like this: INSERT INTO #TABLE EXEC MyServerName.MyDatabase.dbo.MyStoredProcedure Param1, Param2, Param3 The column count and names match exactly; the problem is not the result set. MSDTC is allowed and started on both computers, and so is remote procedure calling. The machines are not in the same domain, but I can execute remote queries from my machine and get results. I can even execute the stored procedure and see its results; I just can't insert them into another table. Help, please? :)
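
    The INSERT ... EXEC against a linked server is what promotes the call to a distributed transaction, and cross-domain MSDTC setups often fail to negotiate that. If you don't need the distributed-transaction guarantee, one thing to try (SQL Server 2008 and later, run by a sysadmin) is turning off transaction promotion for that linked server; a minimal sketch:

        -- minimal sketch: stop INSERT ... EXEC from escalating to a distributed transaction
        EXEC sp_serveroption @server   = 'MyServerName',
                             @optname  = 'remote proc transaction promotion',
                             @optvalue = 'false';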

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons.

    Relational reports have a predefined structure, and end users cannot change it. They are simple for end users to use. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users. Any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models.

    For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report. Typically, they do it graphically. However, the development of an OLAP system is often quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process.

    With data mining, a structure is not selected in advance; it searches for the structure. As a result, data mining can give you the most valuable results because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes.

    Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the “I” in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events.

    Books

    Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there.
    Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book.
    Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can’t miss this book if you want to mine your data with SQL Server tools.
    Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both a business and a technical perspective.
    Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation.
    Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining.
    Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – an in-depth explanation of the most popular data mining algorithms.
    Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective.
    Paolo Giudici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general.

    Forthcoming presentations

    I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC:
    Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects.
    Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel.
    Sinergija 2013, Belgrade, Serbia - Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel.
    SQL Rally Amsterdam, Netherlands - Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel.
    Why three different titles for the same presentation? I don’t know, I guess I forgot the name I proposed every time right after I sent the proposal.

    Courses

    Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar format, you can contact your closest SolidQ subsidiary, or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.

    OK, now you know: no more excuses. Start learning data mining and get the most out of your data.

  • Middleware for MongoDB or CouchDB with jQuery Ajax/JSON frontend

    - by Tauren
    I've been using the following web development stack for a few years: java/spring/hibernate/mysql/jetty/wicket/jquery. For certain requirements, I'm considering switching to a NoSQL datastore with an AJAX frontend. I would probably build the frontend with jQuery and communicate with the web application middleware using JSON. I'm leaning toward MongoDB because of its more dynamic query capabilities, but am still considering CouchDB. I'm not sure what to use in the middle. Probably something RESTful? My preference is to stick with Java (or maybe Scala or Groovy) since I'm using tools like Drools for rules and Shiro for security. But then again, I want to pick something that is quick and easy to work with, so I'm open to other solutions. If you are building ajax/json/nosql solutions, I'd like to hear details about what tools you are using and any pros/cons you've found to using them. Thanks!

  • Switching from LinqToXYZ to LinqToObjects

    - by spender
    In answering this question, it got me thinking... I often use this pattern: collectionofsomestuff //here it's LinqToEntities .Select(something=>new{something.Name,something.SomeGuid}) .ToArray() //From here on it's LinqToObjects .Select(s=>new SelectListItem() { Text = s.Name, Value = s.SomeGuid.ToString(), Selected = false }) Perhaps I'd split it over a couple of lines, but essentially, at the ToArray point, I'm effectively enumerating my query and storing the resulting sequence so that I can further process it with all the goodness of a full CLR to hand. As I have no interest in any kind of manipulation of the intermediate list, I use ToArray over ToList as there's less overhead. I do this all the time, but I wonder if there is a better pattern for this kind of problem?
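
    One alternative worth knowing about, sketched below: AsEnumerable() switches the rest of the chain to LINQ to Objects without materializing an intermediate array, so the items stream straight into the second Select. (Only use it when you genuinely don't need a snapshot of the intermediate results.)

        // minimal sketch: AsEnumerable() changes the static type to IEnumerable<T>,
        // so everything after it runs as LINQ to Objects while the data still streams
        var items = collectionofsomestuff                        // LINQ to Entities up to here
            .Select(something => new { something.Name, something.SomeGuid })
            .AsEnumerable()                                      // no intermediate array is built
            .Select(s => new SelectListItem
            {
                Text = s.Name,
                Value = s.SomeGuid.ToString(),
                Selected = false
            })
            .ToList();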

  • OpenDataSource fails, please help

    - by Vivek Chandraprakash
    I'm trying to export records from SQL Server 2008 to an .mdb file using OpenDataSource. It works when I log in using Windows authentication, but it fails when I use SQL Server authentication. This is the error I get: OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" returned message "Could not delete from specified tables.". Msg 7320, Level 16, State 2, Procedure EXPORT_Employee, Line 110 Cannot execute the query "DELETE FROM employee_export " against OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)". Thanks -Vivek

  • Are the svn ruby bindings provided as a gem?

    - by user30997
    I see a couple dozen gems that relate to svn, but what little documentation I can find on any of them shows that they are command-line wrappers and misc helpers (svn-command, svn-hooks, etc.). I've seen code in the wild that does things like require 'svn/core' and SVN.Repos.add(...), but the author of that module pulled his svn ruby tools via apt-get. This is not an option for me, as I'm developing a Windows/OS X tool. Which gem am I after? From there, I'm happy to dig through code in lieu of documentation, but with a call to gem query --name-matches svn --remote returning about 30 hits, I need to narrow it down a bit first.

  • What does using RESTful URLs buy me?

    - by Spike Williams
    I've been reading up on REST, and I'm trying to figure out what the advantages to using it are. Specifically, what is the advantage to REST-style URLs that make them worth implementing over a more typical GET request with a query string? Why is this URL: http://www.parts-depot.com/parts/getPart?id=00345 Considered inferior to this? http://www.parts-depot.com/parts/00345 In the above examples (taken from here) the second URL is indeed more elegant looking and concise. But it comes at a cost... the first URL is pretty easy to implement in any web language, out of the box. The second requires additional code and/or server configuration to parse out values, as well as additional documentation and time spent explaining the system to junior programmers and justifying it to peers. So, my question is, aside from the pleasure of having URLs that look cool, what advantages do RESTful URLs gain for me that would make using them worth the cost of implementation?
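
    As a concrete illustration of the "server configuration" cost mentioned above, here is a minimal sketch of an Apache mod_rewrite rule that maps the path-style URL onto the existing query-string handler (paths taken from the example URLs; ASP.NET, Rails and the like would use their routing tables for the same job):

        # minimal sketch: serve /parts/00345 through the existing getPart handler
        RewriteEngine On
        RewriteRule ^parts/([0-9]+)$ /parts/getPart?id=$1 [L]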

  • Returning to last viewed List page after insert/edit with ASP.NET Dynamic Data

    - by Pat James
    With a pretty standard Dynamic Data site, when the user edits or inserts a new item and saves, the page does a Response.Redirect(table.ListActionPath), which takes the user back to page 1 of the table. If they were editing an item on page 10 (or whatever) and want to edit the next item on that page, they have to remember the page number and navigate back to it. What's the best way to return the user to the list page they last viewed? I can conceive of some solutions using cookies, session state, or query string values to retain this state and making my own Page Template to incorporate it, but I can't help thinking this must be something that was considered when Dynamic Data was created, and there must be something simpler or built-in to the framework that I'm missing here.
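
    For what it's worth, the session/view-state variant mentioned above can stay quite small in a custom page template: capture the referring list URL on the first load of the Edit template, then redirect back to it after the update instead of to table.ListActionPath. A minimal sketch against the stock Edit page template (the ViewState key is a hypothetical name; the handler and the table field follow the standard template):

        // minimal sketch: remember where the user came from and return there after saving
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack && Request.UrlReferrer != null)
            {
                ViewState["ReturnUrl"] = Request.UrlReferrer.ToString();
            }
        }

        protected void DetailsView1_ItemUpdated(object sender, DetailsViewUpdatedEventArgs e)
        {
            if (e.Exception == null || e.ExceptionHandled)
            {
                string returnUrl = ViewState["ReturnUrl"] as string;
                Response.Redirect(returnUrl ?? table.ListActionPath);  // fall back to the default behaviour
            }
        }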

  • How to order by a known id value in mysql

    - by markj
    Hi, is it possible to order a select from MySQL so that a known id comes first? Example: $sql = "SELECT id, name, date FROM TABLE ORDER BY "id(10)", id DESC Not sure if the above makes sense, but what I would like to do is first order by the known id, which is 10, which I placed in quotes as "id(10)". I know that cannot work as-is in the query, but it's just to point out what I am trying to do. If something similar can work I would appreciate it. Thanks.
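
    MySQL lets you sort on an expression, so "this id first, then everything else" can be written directly. A minimal sketch of two equivalent spellings (mytable is a hypothetical name standing in for TABLE above):

        -- minimal sketch: rows with id = 10 sort first, the rest follow in descending id order
        SELECT id, name, date
        FROM mytable
        ORDER BY (id = 10) DESC, id DESC;

        -- or, using FIELD(): the listed id(s) first, others after
        SELECT id, name, date
        FROM mytable
        ORDER BY FIELD(id, 10) DESC, id DESC;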

  • Oracle Linked Server error: ORA-12640: Authentication adapter initialization failed

    - by Chenster
    I have a linked server on SQL Server that talks to Oracle. Executing the following SQL statement using OPENQUERY SELECT * FROM OPENQUERY(finance, 'select * from KFRI.VW_XREF_PROJECTS') gets the following error: OLE DB provider "OraOLEDB.Oracle" for linked server "finance" returned message "ORA-12640: Authentication adapter initialization failed". Msg 7303, Level 16, State 1, Line 1 Cannot initialize the data source object of OLE DB provider "OraOLEDB.Oracle" for linked server "finance". I tried setting SQLNET.AUTHENTICATION_SERVICES = (NONE) in {$ORACLE_HOME}\NETWORK\ADMIN\sqlnet.ora. It did not help. What's interesting is that my coworker is able to execute exactly the same query successfully on his machine without a hitch. Any tips on how to fix this are greatly appreciated!

  • Efficient counting of an association’s association

    - by Matthew Robertson
    In my app, when a User makes a Comment in a Post, Notifications are generated that marks that comment as unread. class Notification < ActiveRecord::Base belongs_to :user belongs_to :post belongs_to :comment class User < ActiveRecord::Base has_many :notifications class Post < ActiveRecord::Base has_many :notifications I’m making an index page that lists all the posts for a user and the notification count for each post for just that user. # posts controller @posts = Post.where( :user_id => current_user.id ) .includes(:notifications) # posts view @posts.each do |post| <%= post.notifications.count %> This doesn’t work because it counts notifications for all users. What’s an efficient way to do this without running a separate query for each post?
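
    One way to keep this to a single extra query, sketched below: group the current user's notifications by post_id once, then look the counts up from the resulting hash in the view (Rails 3 style to match the snippets above; instance variable names are placeholders):

        # controller (minimal sketch): one grouped COUNT query covers every post
        @posts = Post.where(:user_id => current_user.id)
        @notification_counts = Notification.where(:user_id => current_user.id)
                                           .group(:post_id)
                                           .count   # => { post_id => count, ... }

        # view (minimal sketch)
        # <%= @notification_counts[post.id] || 0 %>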

  • Multithreading/Parallel Processing in PHP

    - by manyxcxi
    I have a PHP script that will generate a report using PHPExcel from data queried from a MySQL DB. Currently the processing is linear: it gets the data back from MySQL, reads in the Excel template, writes the data to the template, then outputs it. I have optimized the code to the point that the data is only iterated over once, and there is very little processing done on the PHP side. The query returns hundreds of lines in less than .001 seconds, so it is running fast enough. After some timing I have found my bottlenecks to be (surprise, surprise) reading the template and writing the output. I would like to do this:

    - Spawn a thread/process to read the template
    - Spawn a thread/process to fetch the data
    - Return to the parent thread, which will wait until both are complete
    - Proceed on as normal

    My main questions are: is this possible, and is it worth it? If yes to both, how would you tackle it? Also, it is PHP 5 on CentOS.
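
    PHP has no in-process threads in a stock PHP 5 setup, but a CLI script can fork worker processes with the pcntl extension and hand results back through temporary files or other IPC. This is a minimal sketch under those assumptions (pcntl available, run from the CLI rather than under Apache/mod_php); the template path and fetch_report_data() are hypothetical stand-ins for the question's own code:

        <?php
        // minimal sketch: fork one worker to load the template while the parent fetches data
        $tplFile = tempnam(sys_get_temp_dir(), 'tpl');   // hand-off file for the worker's result

        $pid = pcntl_fork();
        if ($pid === -1) {
            die('fork failed');
        } elseif ($pid === 0) {
            // child: read the template and leave it where the parent can pick it up
            $template = file_get_contents('report_template.xlsx');  // hypothetical template path
            file_put_contents($tplFile, $template);
            exit(0);
        }

        // parent: fetch the MySQL data while the child works
        $data = fetch_report_data();                      // hypothetical query function

        pcntl_waitpid($pid, $status);                     // wait for the template worker
        $template = file_get_contents($tplFile);
        unlink($tplFile);
        // ... continue as before: populate the template with PHPExcel and output it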

  • Remove characters after specific character in string, then remove substring?

    - by sah302
    I feel kind of dumb posting this when it seems kind of simple and there are tons of questions on strings/characters/regex, but I couldn't find quite what I needed (except in another language: http://stackoverflow.com/questions/2176544/remove-all-text-after-certain-point). I've got the following code: [Test] public void stringManipulation() { String filename = "testpage.aspx"; String currentFullUrl = "http://localhost:2000/somefolder/myrep/test.aspx?q=qvalue"; String fullUrlWithoutQueryString = currentFullUrl.Replace("?.*", ""); String urlWithoutPageName = fullUrlWithoutQueryString.Remove(fullUrlWithoutQueryString.Length - filename.Length); String expected = "http://localhost:2000/somefolder/myrep/"; String actual = urlWithoutPageName; Assert.AreEqual(expected, actual); } I tried the solution in the question above (hoping the syntax would be the same!) but no luck. I want to first remove the query string, which could be any variable length, then remove the page name, which again could be any length. How can I remove the query string from the full URL so that this test passes?
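
    Two notes on the snippet above, offered as a sketch rather than a definitive fix: String.Replace does plain substring replacement, so "?.*" is treated literally (a pattern needs Regex.Replace), and the test's filename ("testpage.aspx") doesn't match the "test.aspx" page in the URL, so the length-based Remove trims the wrong number of characters even once the query string is gone. Cutting at the last '/' sidesteps the page name entirely:

        // minimal sketch: drop the query string, then everything after the last '/'
        string currentFullUrl = "http://localhost:2000/somefolder/myrep/test.aspx?q=qvalue";

        int q = currentFullUrl.IndexOf('?');
        string withoutQuery = q >= 0 ? currentFullUrl.Substring(0, q) : currentFullUrl;

        string urlWithoutPageName = withoutQuery.Substring(0, withoutQuery.LastIndexOf('/') + 1);
        // urlWithoutPageName == "http://localhost:2000/somefolder/myrep/"

    (Regex.Replace(currentFullUrl, @"\?.*$", "") does the first step as a regex if you prefer that route.)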
