Search Results

Search found 891 results on 36 pages for 'comma'.

  • SQL 2008 R2 Named Instance Client Connectivity Issues?

    - by Jerry Dodge
    We're upgrading our software from SQL 2000 to 2008 R2. Our customers will be installing an update which uninstalls 2000 and installs 2008 R2 under the same instance, so if no instance existed, no instance name will be set (default). The problem starts with the customers who have a named SQL instance. Starting in 2008 R2 (not sure about earlier versions), a client connecting to the server by its instance name is unsuccessful. I'm testing from Management Studio - if I can't connect with this, then nothing can connect. I browse network servers and find the specific server\instance in the list, but upon trying to connect to an instance name like MyServer\INST, I get:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

    I do in fact have the TCP/IP and Named Pipes protocols enabled; this was the first thing I did. When I connect to the server using a comma (,) and port number, like MyServer, 49195, it works just fine. So it appears that client computers are simply unable to resolve the instance names. This has happened on all our installations of SQL 2008 R2 and from all client computers, including Win 7, XP, Vista, Server 2008, and Server 2003. We never experienced such issues on earlier versions of SQL. The problem even persists if the firewalls and antiviruses are all disabled.

    Now, this is a large update which we will be distributing soon to all our customers, and we want to minimize the interaction they need with us to get it installed. We absolutely hate the idea of using a port number, because it will always be different, and we would have to modify each client to point to this server/port. Some of our customers may have hundreds of client computers. How do I make client connections to a named SQL instance work again? After all, this is the whole purpose of named instances - if a client can't connect to an instance by its name, then what is it even named for?

    EDIT: It was mentioned to make sure SQL Browser is running, so I checked, and it is running. The server is also able to connect to itself (locally) - only external connections are refused.

    UPDATE: After more careful checking, I learned the firewall wasn't completely disabled when testing. Upon disabling it completely, named-instance connections work. So it appears the firewall was blocking external clients from reaching SQL Browser.
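    Since SQL Browser answers instance-name lookups over UDP port 1434, the fix suggested by the UPDATE above is a firewall exception for that port on each server, rather than disabling the firewall. A minimal sketch (Windows Server 2008 and later; the rule name is arbitrary):

        REM Allow clients to resolve named instances via SQL Browser
        netsh advfirewall firewall add rule name="SQL Server Browser" dir=in action=allow protocol=UDP localport=1434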

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

    The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network into three distinct components.

    Three-Tier Client/Server Model Components:

    - Client Component
    - Server Component
    - Database Component

    The Client Component of the network typically represents any device on the network. A basic example would be a computer or another network/web-enabled device connected to the network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This is done through the use of various software clients; an example can be seen in a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process.

    The Server Components of the network return data to the requesting client based on the specific request. Server Components also inherit the attributes of a Client Component, in that they are devices on the network and can also request information from other Server Components. What differentiates a Client Component from a Server Component is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests, interprets each request, processes the web pages, and returns the processed data to the web browser client so that it may render the data for the user to interpret.

    The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component: it is a device on the network, it can request information from other Server Components and Database Components, and it also listens for new requests so that it can return data when needed.

    The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another.

    Three-Tier Application Architecture Model:

    - Presentation Layer/Logic
    - Business Layer/Logic
    - Data Layer/Logic

    The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting that data back to the user. Both the Presentation Layer and the Client Component focus primarily on the user and their experience. Because the Presentation Layer is not directly integrated with the Business Layer, segments of the Business Layer can be distributable and interchangeable; the Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows the Presentation Layer and Business Layer to be stored on one or more different servers, providing higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site takes on the task of load balancing, it must obtain a network device that sits in front of one or more machines in order to distribute requests across multiple servers. When a user comes in through the load-balancing device they are redirected to a specific server based on a few factors.

    Common Load Balancing Factors:

    - Current Server Availability
    - Current Server Response Time
    - Current Server Priority

    The Business Layer and its corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to it being stored in the Data Access Layer. A good example of this is a user trying to create multiple accounts under one email address: the Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer (a small sketch of this appears at the end of this article). The Server Component can be directly tied to this layer, in that the server typically stores and processes the Business Layer before results are returned to the end user via the Presentation Layer. In addition, the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer, so that additional business analysis can be derived from the data that has already been collected.

    The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically, in most modern applications, data is stored in a database management system; however, data can also be in the form of files stored on a file server. In addition, a database can take on one of several forms.

    Common Database Formats:

    - XML File
    - Pipe Delimited File
    - Tab Delimited File
    - Comma Delimited File (CSV)
    - Plain Text File
    - Microsoft Access
    - Microsoft SQL Server
    - MySQL
    - Oracle
    - Sybase

    The Database Component of the networking model can be directly tied to the Data Layer, because this is where the Data Layer obtains the data it returns to the Business Layer. The Database Component basically provides a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall the saved data as needed, so that the application does not have to retain all data in memory and incur the overhead that would create.

    As you can see, the Three-Tier Client/Server Networking Model and the Three-Tier Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is wonderful when an application has a need to distribute work across the network.

    Network Components and Application Layers Interaction:

    - The Database Component stores all data needed by the Data Access Layer to manipulate and return to the Business Layer
    - The Server Component executes the Business Layer, which manipulates data so that it can be returned to the Presentation Layer
    - The Client Component hosts the Presentation Layer, which interprets the data and presents it to the user
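    To make the Business Layer example above concrete, here is a minimal sketch of enforcing the unique-email rule before anything reaches the Data Layer. The class and method names are hypothetical, purely for illustration:

        class AccountService(object):
            """Business Layer: applies rules before touching the Data Layer."""

            def __init__(self, account_repository):
                self.accounts = account_repository  # Data Access Layer abstraction

            def create_account(self, email, name):
                # Business rule: one account per email address.
                if self.accounts.find_by_email(email) is not None:
                    raise ValueError("An account already exists for %s" % email)
                return self.accounts.save({"email": email, "name": name})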

  • How to Automate your Database Documentation

    - by Jonathan Hickford
    In my previous post, "Automating Deployments with SQL Compare command line", I looked at how teams can automate the deployment and post-deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I'm looking at another use for the command line tools, namely using them to generate up-to-date documentation with every database change. There are many reasons why up-to-date documentation is valuable: for example, when somebody new has to work on or administer a database for the first time, or when a new database comes into service. Having database documentation reduces the risks of making incorrect decisions when making changes. Documentation is also very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples talking about why up-to-date documentation is valuable on this site: Database Documentation - Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need that most! SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word or PDF files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate's SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring that the documentation is always up to date. The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day (a one-line example appears at the end of this post). However, if you have a source controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you're using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a Continuous Integration (CI) server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always up-to-date data dictionary. If you don't already have a CI server in place there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won't cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may be interested in Red Gate's SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control. The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
        $serverName = "server\instance"
        $databaseName = "databaseName" # If you want to document multiple databases use a comma separated list
        $userName = "username"
        $password = "password"

        # Path to SQLDoc.exe
        $SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe"

        $arguments = @(
            "/server:$($serverName)",
            "/database:$($databaseName)",
            "/username:$($userName)",
            "/password:$($password)",
            "/filetype:html",
            "/outputfolder:.",
            # "/project:$args[0]", # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden with any options set on the command line
            "/name:$databaseName Report",
            "/copyrightauthor:$([Environment]::UserName)"
        )

        write-host $arguments
        & $SQLDocPath $arguments

    There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name, and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line, please see the SQL Doc command line documentation.

    If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project, please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment this line and then pass a path to a .sqldoc project file as an argument to this script.

    Conclusion

    Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process you can trigger the documentation to be run on every change to a source controlled database, and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.
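    If you go the scheduled-task route mentioned earlier, a daily trigger can be created in one line. A sketch, assuming the script above is saved as C:\Scripts\GenerateDocs.ps1 (the task name, path and time are placeholders):

        schtasks /create /tn "Update DB Docs" /tr "powershell.exe -File C:\Scripts\GenerateDocs.ps1" /sc daily /st 02:00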

  • Working with Reporting Services Filters–Part 1

    - by smisner
    There are two ways that you can filter data in Reporting Services. The first way, which usually provides faster performance, is to use query parameters to apply a filter using the WHERE clause in a SQL statement. In that case, the structure of the filter depends upon the syntax recognized by the source database. Another way to filter data in Reporting Services is to apply a filter to a dataset, data region, or a group. Using this latter method, you can even apply multiple filters. However, the use of filter operators or the setup of multiple filters is not always obvious, so in this series of posts, I'll provide some more information about the configuration of filters.

    First, why not use query parameters exclusively for filtering? Here are a few reasons:

    - You might want to apply a filter to part of the report, but not all of the report.
    - Your dataset might retrieve data from a stored procedure, which doesn't allow you to pass a query parameter for filtering purposes.
    - Your report might be set up as a snapshot on the report server and, in that case, cannot be dynamically filtered based on a query parameter.

    Next, let's look at how to set up a report filter in general. The process is the same whether you are applying the filter to a dataset, data region, or a group. When you go to the Filters page in the Properties dialog box for whichever of these items you selected (dataset, data region, group), you click the Add button to create a new filter. The Expression field is usually a field in the dataset, so to make it easier for you to make a selection, the drop-down list displays all of the current dataset fields. But notice the expression button to the right, which means that you can set up any type of expression - not just a dataset field. To the right of the expression button, you'll find a data type drop-down list. It's important to specify the correct data type for the field or expression you're using.

    Now for the operators. Here's a list of the options that you have:

        =, <>, >, >=, <, <=, Like   Compares expression to value
        Top N, Bottom N             Compares expression to Top (Bottom) set of N values (N = integer)
        Top %, Bottom %             Compares expression to Top (Bottom) N percent of values (N = integer or float)
        Between                     Determines whether expression is between two values, inclusive
        In                          Determines whether expression is found in list of values

    Last, the Value is what you're comparing to the expression using the operator. The construction of a filter using some operators (=, <>, >, etc.) is fairly simple. If my dataset (for AdventureWorks data) has a Category field, and I have a parameter that prompts the user for a single category, I can set up a filter like this:

        Expression   Data Type   Operator   Value
        [Category]   Text        =          [@Category]

    But if I set the parameter to accept multiple values, I need to change the operator from = to In, just as I would have to do if I were using a query parameter. The parameter expression, [@Category], which translates to =Parameters!Category.Value, doesn't need to change, because it represents an array as soon as I change the parameter to allow multiple values. The "In" operator requires an array. With that in mind, let's consider a variation on Value. Let's say that I have a parameter that prompts the user for a particular year - and for simplicity's sake, this parameter only allows a single value - and I have an expression that evaluates the previous year based on the user's selection.
    Then I want to use these two values in two separate filters with an OR condition. That is, I want to filter either by the year selected OR by the year that was computed. If I create two filters, one for each year (as shown below), then the report will only display results if BOTH filter conditions are met - which would never be true.

        Expression       Data Type   Operator   Value
        [CalendarYear]   Integer     =          [@Year]
        [CalendarYear]   Integer     =          =Parameters!Year.Value-1

    To handle this scenario, we need to create a single filter that uses the "In" operator, and then set up the Value expression as an array. To create an array, we use the Split function after creating a string that concatenates the two values, as shown below.

        Expression                         Data Type   Operator   Value
        =Cstr(Fields!CalendarYear.Value)   Text        In         =Split(CStr(Parameters!Year.Value) + "," + CStr(Parameters!Year.Value-1), ",")

    Note that in this case, I had to apply a string conversion on the year integer so that I could concatenate the parameter selection with the calculated year. Pay attention to the second argument of the Split function - you must use a comma delimiter for the result to work correctly with the In operator. I also had to change the Expression value from [CalendarYear] (or =Fields!CalendarYear.Value) so that the expression would return a string that I could compare with the values in the string array. More fun with filter expressions in future posts!

  • Converting LINQ to Twitter to Twitter API v1.1

    - by Joe Mayo
    Twitter recently updated their API to v1.1 (Current status: API v1.1). Naturally, LINQ to Twitter needed to be updated too. This blog post outlines the changes made to LINQ to Twitter during this conversion and highlights important features that LINQ to Twitter developers will want to know.

    Overall Impact

    Generally speaking, Twitter API v1.1 is semantically very much the same as its predecessor. The base URL changed and so did a few resource segments, but the resources themselves are still intact. The good news is that LINQ to Twitter has always shielded the developer from this plumbing, so the entities, types, and filters didn't change much at all. The following sections describe what did change.

    Authentication

    In Twitter API v1.0, authentication was not required for some resources, such as user timelines and search. However, that's all changed, because *all* queries must be authenticated in Twitter API v1.1. LINQ to Twitter has various types of authorizers you can use, supporting whatever OAuth options are available via Twitter. You can see the LINQ to Twitter documentation, Securing Your Applications, for more info on OAuth support.

    The New Search

    One of the larger changes to the API was Search. To be more specific, the Search entity now contains a List<Status>, named Statuses, to hold results. Additionally, any meta-data associated with the search is now in a property named SearchMetaData. The change to the Search entity and responses is the big change, but the good news is that your Search query syntax doesn't change.

    Different Rate Limits

    The issue of rate limits itself is contentious, but this discussion is focused on the coding experience, and I'll leave the politics to those who prefer to engage in that activity. What's important here is that both headers and resources have changed. You should review Twitter's Rate Limit documentation to understand what the changes mean. A quick explanation is that rate limits are applied individually to each resource in 15-minute time intervals. In LINQ to Twitter these changes surface on the Help entity, via HelpType.RateLimits. The RateLimits query has a Resources filter where you can specify a comma-separated list of categories to return rate limit info for. The results materialize in the RateLimits dictionary, keyed on category (a rough sketch appears at the end of this post). The Help entity also has a RateLimitsAuthorizationContext, holding the Access Token for the user performing queries - and to whom the rate limits apply. In addition to the new RateLimits query, there are new RateLimit headers that appear in the query response, whose HTTP header name is of the form X-Rate-Limit..., which is different from the previous header name. LINQ to Twitter surfaces these headers via the existing properties of the TwitterContext instance. For anyone who retrieved rate limit information via the Headers property of TwitterContext, you should be aware of the new header names. I haven't done anything with Feature rate limit properties yet, but they appear to no longer be available - this will require more follow-up.

    Error Handling

    Twitter API v1.1 has a new format for Error Codes & Responses. LINQ to Twitter wraps these messages in the TwitterQueryException, which has been updated appropriately. The Message property of TwitterQueryException now reflects the Twitter error message, when available. There's also a new ErrorCode that's populated with the message error code.
    Parameters

    Most parameters stayed the same, but one of interest is Include Entities (different from LINQ to Twitter data object entities). Entities are metadata hanging off tweets, providing start/end position in the tweet and other information for mentions, urls, hash tags, and media. Entities used to be excluded unless you specified that you wanted them. Now, in v1.1, entities are included by default for all APIs that return a Status. If you were always setting IncludeEntities to true, then you won't see a change. However, be aware that you'll now be receiving additional data in your response from Twitter, which will explain a sudden increase in bandwidth utilization. This might or might not matter to you, depending on the requirements of your application, but you should be aware of it.

    Everything Else

    There might be small changes here and there that I haven't mentioned, but these were the ones you should be most aware of. Streams didn't change, but Twitter will be deprecating username/password authentication on public streams, in favor of OAuth, so you'll be seeing me make that change some time in the future. Also, Twitter will continue to evolve the API, and you can expect that LINQ to Twitter will change accordingly.

    Summary

    The big changes to the Twitter API were Authentication, Search, Rate Limits, and Error Handling. All API calls must be authenticated. You'll need to change your code to read Search results differently, but the query is much the same as you use now. There's a new RateLimits API, one of the Help queries. Also, the new error messages are integrated into TwitterQueryException. Besides these changes, I expect most others to be small or to affect a smaller percentage of developers. You can get the latest version of LINQ to Twitter from NuGet or visit the LINQ to Twitter download page at CodePlex.com.

    @JoeMayo
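    As a rough illustration of the RateLimits query described above - the query shape follows the library's usual LINQ pattern, but treat the property names here as assumptions drawn from this post rather than verified API:

        // Sketch only: query rate limits for a comma-separated list of categories
        var helpResponse =
            (from help in twitterCtx.Help
             where help.Type == HelpType.RateLimits &&
                   help.Resources == "search,statuses"
             select help)
            .SingleOrDefault();

        // Results materialize in a dictionary keyed on category
        foreach (var category in helpResponse.RateLimits.Keys)
        {
            Console.WriteLine(category);
        }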

  • UppercuT v1.0 and 1.1 - Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I'm pretty happy with where we are, although I think it's a few months later than I originally planned. I'm glad I held it back; it gave me some more time to think about some things a little more, and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December).

    UppercuT v1

    Builds On Linux

    Perhaps the most significant change in UC going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and for working with me on getting it all working while not breaking the Windows builds! This means you can use Mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now.

    Multi-Targeting

    Perhaps one of the hardest things to do that requires an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available. We believe in making everything stupid simple, so it's as simple as adding a comma to the microsoft.framework property, i.e. "net-3.5, net-4.0", to suddenly produce both framework builds (an illustrative config sketch appears at the end of this post). At this time you have to let UC override the build location (as it does by default) or this will not work.

    Semantic Versioning

    By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org. SemVer (Semantic Versioning) is really using versioning for what it was meant for. You have three octets: Major.Minor.Patch, as in 1.1.0. UC will use three different versioning concepts - one for the assembly version, one for the file version, and one for the product version.

    - All versions - the first three octets of the version are owned by SemVer. Major.Minor.Patch, i.e.: 1.1.0
    - Assembly Version - the assembly version follows SemVer most closely. The last digit is always 0. Major.Minor.Patch.0, i.e.: 1.1.0.0
    - File Version - the file version occupies the build number as the last digit. Major.Minor.Patch.Build, i.e.: 1.1.0.2650
    - Product/Informational Version - the last octet of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash, i.e. (TFS/SVN): 1.1.0.235, or (Git/HG): 1.1.0.a45ace4346adef0

    SemVer is not on by default; the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet).

    Gems Support

    Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around since there is no alternative for that yet, though (CoApp would be a possible replacement).

    Nitriq Support

    Nitriq is a code analysis tool like NDepend. It's built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It's a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition. To take advantage of Nitriq, you just need to update the location of Nitriq in the config, then add the Nitriq project files at the root of your source. Please refer to the Nitriq documentation on how these are created.

    UppercuT v1.1

    Obfuscation

    One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated!

    Closing

    Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!
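    For reference, a sketch of what the settings mentioned above might look like in UppercuT.config. Only the property names come from this post; the element layout is assumed for illustration:

        <!-- illustrative only -->
        <property name="microsoft.framework" value="net-3.5, net-4.0" />
        <property name="version.use_semanticversioning" value="true" />
        <property name="version.patch" value="0" />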

  • OIM 11g - Multi Valued attribute reconciliation of a child form

    - by user604275
    This topic gives a brief description of how we can reconcile a child form attribute which is also multi-valued, from a flat file. The format of the flat file is (an example):

        ManagementDomain1|Entitlement1|DIRECTORY SERVER,EMAIL
        ManagementDomain2|Entitlement2|EMAIL PROVIDER INSTANCE - UMS,EMAIL VERIFICATION

    In OIM there will be a parent form for the fields Management Domain and Entitlement. Reconciliation will assign Servers (which are multi-valued) to the corresponding Management Domain and Entitlement. In the flat file, multi-valued fields are separated by a comma (,).

    1. In the Design Console, create a form with 'Server Name' as a field and make it a child form.
    2. Open the corresponding Resource Object and add this field for reconciliation. While adding, choose the 'Multivalued' check box (please find the attached screenshot, Child Table.docx, on how to add it).
    3. Open the process definition and add the child form fields for reconciliation.
    4. Click the 'Create Reconciliation Profile' button on the Resource Object tab.

    The API methods used for child form reconciliation are:

        // 'false' here tells that we are creating the recon event for a child table
        reconEventKey = reconOpsIntf.createReconciliationEvent(resObjName, reconData, false);

        // RECON_FIELD_IN_RO is the field that we added in the Resource Object while
        // adding it for reconciliation (please refer to the screenshot)
        reconOpsIntf.providingAllMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, true);

        reconOpsIntf.addDirectBulkMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, bulkChildDataMapList);

    where bulkChildDataMapList is coded as below:

        List<Map> bulkChildDataMapList = new ArrayList<Map>();
        for (int i = 0; i < stokens.length; i++) {
            Map<String, String> attributeMap = new HashMap<String, String>();
            String serverName = stokens[i].toUpperCase();
            attributeMap.put("Server Name", stokens[i]);
            bulkChildDataMapList.add(attributeMap);
        }

    Then finish and process the event:

        reconOpsIntf.finishReconciliationEvent(reconEventKey);
        reconOpsIntf.processReconciliationEvent(reconEventKey);

    Now, we have to register the plug-in, import the metadata into MDS, and then create a scheduled job to execute, which will run the reconciliation.

  • IsNumeric() Broken? Only up to a point.

    - by Phil Factor
    In SQL Server, probably the best-known 'broken' function is poor ISNUMERIC(). The documentation says 'ISNUMERIC returns 1 when the input expression evaluates to a valid numeric data type; otherwise it returns 0. ISNUMERIC returns 1 for some characters that are not numbers, such as plus (+), minus (-), and valid currency symbols such as the dollar sign ($).' Although it will take numeric data types (no, I don't understand why either), its main use is supposed to be to test strings to make sure that you can convert them to whatever numeric datatype you are using (int, numeric, bigint, money, smallint, smallmoney, tinyint, float, decimal, or real). It wouldn't actually be of much use anyway, since each datatype has different rules. You actually need a RegEx to do a reasonably safe check. The other snag is that the IsNumeric() function is a bit broken.

        SELECT ISNUMERIC(',')

    This cheerfully returns 1, since it believes that a comma is a currency symbol (not a thousands-separator) and you meant to say 0, in this strange currency. However,

        SELECT ISNUMERIC(N'€')

    isn't recognized as currency. '+' and '-' are seen to be numeric, which is stretching it a bit. You'll see that what it allows isn't really broken, except that it doesn't recognize Unicode currency symbols: it just tells you that one numeric type is likely to accept the string if you do an explicit conversion to it using the string. Both these work fine, so poor IsNumeric has to follow suit.

        SELECT CAST('0E0' AS FLOAT)
        SELECT CAST(',' AS MONEY)

    But it is harder to predict which data type will accept a '+' sign.

        SELECT CAST('+' AS money)   --0.00
        SELECT CAST('+' AS INT)     --0
        SELECT CAST('+' AS numeric)
        /* Msg 8115, Level 16, State 6, Line 4
           Arithmetic overflow error converting varchar to data type numeric. */
        SELECT CAST('+' AS FLOAT)
        /* Msg 8114, Level 16, State 5, Line 5
           Error converting data type varchar to float. */

    So we can begin to say that maybe IsNumeric isn't really broken, but is answering a silly question: 'Is there some numeric datatype to which I can convert this string?' Almost, but not quite. The bug is that it doesn't understand Unicode currency characters such as the euro or franc, which are actually valid when used in the CAST function (perhaps they're delaying fixing the euro bug just in case it isn't necessary).

        SELECT ISNUMERIC(N'€23.67')      --0
        SELECT CAST(N'€23.67' AS money)  --23.67
        SELECT ISNUMERIC(N'£100.20')     --1
        SELECT CAST(N'£100.20' AS money) --100.20

    Also, the CAST function itself is quirky in that it cannot convert perfectly reasonable string representations of integers into integers:

        SELECT ISNUMERIC('200,000')      --1
        SELECT CAST('200,000' AS INT)
        /* Msg 245, Level 16, State 1, Line 2
           Conversion failed when converting the varchar value '200,000' to data type int. */

    A more sensible question is 'Is this an integer or decimal number?' This cuts out a lot of the apparent quirkiness. We do this by the '+E0' trick. If we want to include floats in the check, we'll need to make it a bit more complicated. Here is a small test-rig.

        SELECT  PossibleNumber,
                ISNUMERIC(CAST(PossibleNumber AS NVARCHAR(20)) + 'E+00') AS Hack,
                ISNUMERIC(PossibleNumber + CASE WHEN PossibleNumber LIKE '%E%'
                                                THEN '' ELSE 'E+00' END) AS Hackier,
                ISNUMERIC(PossibleNumber) AS RawIsNumeric
        FROM    (SELECT CAST(',' AS NVARCHAR(10)) AS PossibleNumber
                 UNION SELECT '£' UNION SELECT '.'
                 UNION SELECT '56' UNION SELECT '456.67890'
                 UNION SELECT '0E0' UNION SELECT '-'
                 UNION SELECT '-' UNION SELECT '.'
                 UNION SELECT N'€' UNION SELECT N'¢'
                 UNION SELECT N'₣' UNION SELECT N'€34.56'
                 UNION SELECT '-345' UNION SELECT '3.332228E+09') AS examples

    Which gives the result:

        PossibleNumber   Hack   Hackier   RawIsNumeric
        --------------   ----   -------   ------------
        ₣                0      0         0
        -                0      0         1
        ,                0      0         1
        .                0      0         1
        ¢                0      0         1
        £                0      0         1
        €                0      0         0
        €34.56           0      0         0
        0E0              0      1         1
        3.332228E+09     0      1         1
        -345             1      1         1
        456.67890        1      1         1
        56               1      1         1

    I suspect that this is as far as you'll get before you abandon IsNumeric in favour of a regex. You can only get part of the way with the LIKE wildcards, because you cannot specify quantifiers. You'll need full-blown Regex strings like these:

        [-+]?\b[0-9]+(\.[0-9]+)?\b   #INT or REAL
        [-+]?\b[0-9]{1,3}\b          #TINYINT
        [-+]?\b[0-9]{1,5}\b          #SMALLINT

    ... but you'll find even these fail to catch numbers out of range. So is IsNumeric() an out-and-out rogue function? Not really, I'd say, but then it would need a damned good lawyer.

  • jqGrid multi-checkbox custom edittype solution

    - by gsiler
    For those of you trying to understand jqGrid custom edit types ... I created a multi-checkbox form element, and thought I'd share. This was built using version 3.6.4. If anyone has a more efficient solution, please pass it on. Within the colModel, the appropriate edit fields look like this:

        edittype: 'custom',
        editoptions: {
            custom_element: MultiCheckElem,
            custom_value: MultiCheckVal,
            list: 'Check1,Check2,Check3,Check4'
        }

    Here are the javascript functions (BTW, it also works - with some modifications - when the list of checkboxes is in a DIV block):

        //------------------------------------------------------------
        // Description:
        //   MultiCheckElem is the "custom_element" function that builds the custom
        //   multiple check box input element. From what I have gathered, jqGrid
        //   calls this the first time the form is launched. After that, only the
        //   "custom_value" function is called.
        //
        //   The full list of checkboxes is in the jqGrid "editoptions" section
        //   "list" tag (in the options parameter).
        //------------------------------------------------------------
        function MultiCheckElem(value, options) {
            //----------
            // for each checkbox in the list
            //   build the input element
            //   set the initial "checked" status
            // endfor
            //----------
            var ctl = '';
            var ckboxAry = options.list.split(',');
            for (var i in ckboxAry) {
                var item = ckboxAry[i];
                ctl += '<input type="checkbox" ';
                if (value.indexOf(item + '|') != -1) {
                    ctl += 'checked="checked" ';
                }
                ctl += 'value="' + item + '"> ' + item + '</input><br />&nbsp;';
            }
            ctl = ctl.replace(/<br \/>&nbsp;$/, '');
            return ctl;
        }

        //------------------------------------------------------------
        // Description:
        //   MultiCheckVal is the "custom_value" function for the custom multiple
        //   check box input element. It appears that jqGrid invokes this function
        //   the first time the form is submitted and, the rest of the time, when
        //   the form is launched (action = 'set') and when it is submitted
        //   (action = 'get').
        //------------------------------------------------------------
        function MultiCheckVal(elem, action, val) {
            var items = '';
            if (action == 'get') { // the form has been submitted
                //----------
                // for each input element
                //   if it's checked, add it to the list of items
                // endfor
                //----------
                for (var i in elem) {
                    if (elem[i].tagName == 'INPUT' && elem[i].checked) {
                        items += elem[i].value + ',';
                    }
                }
                // items contains a comma delimited list that is returned
                // as the result of the element
                items = items.replace(/,$/, '');
            } else { // the form is launched
                //----------
                // for each input element
                //   based on the input value, set the checked status
                // endfor
                //----------
                for (var i in elem) {
                    if (elem[i].tagName == 'INPUT') {
                        if (val.indexOf(elem[i].value + '|') == -1) {
                            elem[i].checked = false;
                        } else {
                            elem[i].checked = true;
                        }
                    }
                }
            }
            return items;
        }

  • jQuery load Google Visualization API with AJAX

    - by Curro
    Hello. There is an issue that I cannot solve; I've been looking a lot on the internet but found nothing. I have this JavaScript that is used to do an Ajax request to PHP. When the request is done, it calls a function that uses the Google Visualization API to draw an annotatedtimeline to present the data. The script works great without Ajax - if I do everything inline it works great - but when I try to do it with Ajax it doesn't work! The error that I get is in the declaration of the "data" DataTable; in the Google Chrome Developer Tools I get: Uncaught TypeError: Cannot read property 'DataTable' of undefined. When the script gets to the error, everything on the page is cleared; it just shows a blank page. So I don't know how to make it work. Please help. Thanks in advance.

        $(document).ready(function(){
            // Get TIER1Tickets
            $("#divTendency").addClass("loading");
            $.ajax({
                type: "POST",
                url: "getTIER1Tickets.php",
                data: "",
                success: function(html){
                    // Successful: load visualization API and send data
                    google.load('visualization', '1', {'packages': ['annotatedtimeline']});
                    google.setOnLoadCallback(drawData(html));
                }
            });
        });

        function drawData(response){
            $("#divTendency").removeClass("loading");
            // Data comes from PHP like:
            //   <CSV ticket count for each day>*<CSV dates for ticket counts>*<total number of days counted>
            // So it has to be split first by * then by ,
            var dataArray   = response.split("*");
            var dataTickets = dataArray[0];
            var dataDates   = dataArray[1];
            var dataCount   = dataArray[2];
            // The comma separation now splits the ticket counts and the dates
            var dataTicketArray = dataTickets.split(",");
            var dataDatesArray  = dataDates.split(",");
            // Visualization data
            var data = new google.visualization.DataTable();
            data.addColumn('date', 'Date');
            data.addColumn('number', 'Tickets');
            data.addRows(dataCount);
            var dateSplit = new Array();
            for(var i = 0; i < dataCount; i++){
                // Separating the data because it must be entered as "new Date(YYYY,M,D)"
                dateSplit = dataDatesArray[i].split("-");
                data.setValue(i, 0, new Date(dateSplit[2], dateSplit[1], dateSplit[0]));
                data.setValue(i, 1, parseInt(dataTicketArray[i]));
            }
            var annotatedtimeline = new google.visualization.AnnotatedTimeLine(document.getElementById('divTendency'));
            annotatedtimeline.draw(data, {displayAnnotations: true});
        }
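    For what it's worth, the loader also accepts a callback in the settings object passed to google.load; invoking drawData from that callback (rather than via setOnLoadCallback after the page has already finished loading) is a variation worth trying. A sketch of just the success handler:

        success: function(html){
            google.load('visualization', '1', {
                packages: ['annotatedtimeline'],
                // run drawData only once the API has actually finished loading
                callback: function(){ drawData(html); }
            });
        }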

  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory. But I thought for a file of 500MB I should still be able to do it in memory, since I have 1GB available. However, when I try to read in the file, Python seems to allocate a lot more memory than is needed by the file on disk. So even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import sys

        infile = open("links.csv", "r")
        edges = []
        count = 0

        # count the total number of lines in the file
        for line in infile:
            count = count + 1

        total = count
        print "Total number of lines: ", total

        infile.seek(0)
        count = 0
        for line in infile:
            edge = tuple(map(int, line.strip().split(",")))
            edges.append(edge)
            count = count + 1
            # for every million lines print memory consumption
            if count % 1000000 == 0:
                print "Position: ", edge
                print "Read ", float(count) / float(total) * 100, "%."
                mem = sys.getsizeof(edges)
                for edge in edges:
                    mem = mem + sys.getsizeof(edge)
                    for node in edge:
                        mem = mem + sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)
        Read  3.26693612356 %.
        Memory (Bytes):  64348736
        Position:  (38857, 103574)
        Read  6.53387224712 %.
        Memory (Bytes):  128816320
        Position:  (83609, 63498)
        Read  9.80080837067 %.
        Memory (Bytes):  192553000
        Position:  (139692, 1078610)
        Read  13.0677444942 %.
        Memory (Bytes):  257873392
        Position:  (205067, 153705)
        Read  16.3346806178 %.
        Memory (Bytes):  320107588
        Position:  (283371, 253064)
        Read  19.6016167413 %.
        Memory (Bytes):  385448716
        Position:  (354601, 377328)
        Read  22.8685528649 %.
        Memory (Bytes):  448629828
        Position:  (441109, 3024112)
        Read  26.1354889885 %.
        Memory (Bytes):  512208580

    Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
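    For comparison, a minimal sketch of a lower-overhead layout - two flat arrays of machine-width ints instead of a list of tuple objects (Python 2.x to match the question; the variable names are made up):

        from array import array

        firsts = array('i')   # machine ints, no per-object overhead
        seconds = array('i')
        infile = open("links.csv", "r")
        for line in infile:
            a, b = line.split(",")
            firsts.append(int(a))
            seconds.append(int(b))
        infile.close()

        # sort by the second column without materializing tuples
        order = sorted(xrange(len(seconds)), key=seconds.__getitem__)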

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way. I have an Articles table:

        CREATE TABLE [dbo].[Articles](
            [server_ref] [int] NOT NULL,
            [article_ref] [int] NOT NULL,
            [article_title] [varchar](400) NOT NULL,
            [category_ref] [int] NOT NULL,
            [size] [bigint] NOT NULL
        )

    Data (comma delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing:

    1. Indexes are disabled on the Articles table.
    2. For each dumped text file, data is BULK copied to a temporary table.
    3. The temporary table is updated.
    4. Old data for the server is dropped from the Articles table.
    5. The temporary table data is copied to the Articles table.
    6. The temporary table is dropped.

    Once this process is complete for all servers, the indexes are built and the new database is copied to a web server. I am reasonably happy with this process, but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better, i.e.:

        SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%'

    has been satisfactory, but I want to improve the speed of searching. Obviously the "LIKE" is my problem here. Suggestions?

        SELECT * FROM Articles WHERE article_title LIKE '%criteria%'

    is horrendous. Partitioning is a feature of SQL Server Enterprise, but $$$, which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred by the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal, but it works. I am putting thought into changing the hardware structure of the system. The thought process of having an import server and a web server is so that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying-across-the-network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness by giving it a big shake up.

    UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.
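    For reference, the per-server import cycle described above translates fairly directly to PostgreSQL. A sketch, with the file path and staging-table name as placeholders:

        -- load one server's daily dump into a staging table, then swap it in
        BEGIN;
        CREATE TEMP TABLE articles_staging (LIKE articles);
        COPY articles_staging FROM '/data/dumps/server_33.csv' CSV;
        DELETE FROM articles WHERE server_ref = 33;
        INSERT INTO articles SELECT * FROM articles_staging;
        DROP TABLE articles_staging;
        COMMIT;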

  • Lucene.NET search index approach

    - by Tim Peel
    Hi, I am trying to put together a test case for using Lucene.NET on one of our websites. I'd like to do the following:

    - Index a single unique id.
    - Index across a comma-delimited string of terms or tags.

    For example:

        Item 1:
        Id = 1
        Tags = Something,Separated-Term

    I will then be structuring the search so I can look for documents by tag, i.e. tags:something OR tags:separate-term. I need to maintain the exact term value in order to search against it. I have something running, and the search query is being parsed as expected, but I am not seeing any results. Here's some code. My parser (_luceneAnalyzer is passed into my indexing service):

        var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_CURRENT, "Tags", _luceneAnalyzer);
        parser.SetDefaultOperator(QueryParser.Operator.AND);
        return parser;

    My Lucene.NET document creation:

        var doc = new Document();

        var id = new Field(
            "Id",
            NumericUtils.IntToPrefixCoded(indexObject.id),
            Field.Store.YES,
            Field.Index.NOT_ANALYZED,
            Field.TermVector.NO);

        var tags = new Field(
            "Tags",
            string.Join(",", indexObject.Tags.ToArray()),
            Field.Store.NO,
            Field.Index.ANALYZED,
            Field.TermVector.YES);

        doc.Add(id);
        doc.Add(tags);

        return doc;

    My search:

        var parser = BuildQueryParser();
        var query = parser.Parse(searchQuery);
        var searcher = Searcher;
        TopDocs hits = searcher.Search(query, null, max);

        IList<SearchResult> result = new List<SearchResult>();
        float scoreNorm = 1.0f / hits.GetMaxScore();
        for (int i = 0; i < hits.scoreDocs.Length; i++)
        {
            float score = hits.scoreDocs[i].score * scoreNorm;
            result.Add(CreateSearchResult(searcher.Doc(hits.scoreDocs[i].doc), score));
        }
        return result;

    I have two documents in my index, one with the tag "Something" and one with the tags "Something" and "Separated-Term". It's important for the - to remain in the terms, as I want an exact match on the full value. When I search with "tags:Something" I do not get any results.

    Question

    - What Analyzer should I be using to achieve the search index I am after?
    - Are there any pointers for putting together a search such as this?
    - Why is my current search not returning any results?
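    One direction worth sketching here (an illustration, not a verified fix): rather than joining the tags into one analyzed string, add each tag as its own non-analyzed Field instance under the same name, so hyphenated terms like "Separated-Term" survive intact as single index terms:

        // one Field per tag; NOT_ANALYZED keeps the exact term, hyphen included
        foreach (var tag in indexObject.Tags)
        {
            doc.Add(new Field(
                "Tags",
                tag.ToLowerInvariant(),   // normalize case to match lowercased queries
                Field.Store.NO,
                Field.Index.NOT_ANALYZED));
        }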

  • `gem install mongrel` fails with ruby 1.9.1

    - by atlantis
    I initiated myself into rails development yesterday. I installed ruby 1.9.1, rubygems and rails. Running gem install mongrel worked fine and ostensibly installed mongrel too. I am slightly puzzled because:

    - script/server starts webrick by default
    - which mongrel returns nothing
    - locate mongrel returns lots of entries like:

        /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1
        /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib
        /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel
        ...
        /usr/local/bin/mongrel_rails
        /usr/local/lib/ruby/gems/1.9.1/cache/mongrel-1.1.5.gem
        /usr/local/lib/ruby/gems/1.9.1/doc/actionpack-2.3.2/rdoc/files/lib/action_controller/vendor/rack-1_0/rack/handler/evented_mongrel_rb.html
        /usr/local/lib/ruby/gems/1.9.1/doc/actionpack-2.3.2/rdoc/files/lib/action_controller/vendor/rack-1_0/rack/handler/mongrel_rb.html
        /usr/local/lib/ruby/gems/1.9.1/doc/actionpack-2.3.2/rdoc/files/lib/action_controller/vendor/rack-1_0/rack/handler/swiftiplied_mongrel_rb.html
        /usr/local/lib/ruby/gems/1.9.1/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/handler/evented_mongrel.rb
        /usr/local/lib/ruby/gems/1.9.1/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/handler/mongrel.rb
        /usr/local/lib/ruby/gems/1.9.1/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/handler/swiftiplied_mongrel.rb
        /usr/local/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5
        ...

    It does look like I have mongrel installed (both the default installation and my custom install). So why doesn't which mongrel return something? Also, trying to reinstall mongrel using gem install mongrel throws its own set of exceptions:

        Building native extensions.  This could take a while...
        ERROR:  Error installing mongrel:
                ERROR: Failed to build gem native extension.

        /usr/local/bin/ruby extconf.rb install mongrel
        checking for main() in -lc... yes
        creating Makefile

        make
        gcc -I. -I/usr/local/include/ruby-1.9.1/i386-darwin9.7.0 -I/usr/local/include/ruby-1.9.1/ruby/backward -I/usr/local/include/ruby-1.9.1 -I. -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -fno-common -D_XOPEN_SOURCE=1 -O2 -g -Wall -Wno-parentheses -fno-common -pipe -fno-common -o http11.o -c http11.c
        http11.c: In function 'http_field':
        http11.c:77: error: 'struct RString' has no member named 'ptr'
        http11.c:77: error: 'struct RString' has no member named 'len'
        http11.c:77: warning: left-hand operand of comma expression has no effect
        http11.c:77: warning: statement with no effect
        http11.c: In function 'header_done':
        http11.c:172: error: 'struct RString' has no member named 'ptr'
        http11.c:174: error: 'struct RString' has no member named 'ptr'
        http11.c:176: error: 'struct RString' has no member named 'ptr'
        http11.c:177: error: 'struct RString' has no member named 'len'
        http11.c: In function 'HttpParser_execute':
        http11.c:298: error: 'struct RString' has no member named 'ptr'
        http11.c:299: error: 'struct RString' has no member named 'len'
        make: *** [http11.o] Error 1

  • Oracle Coding Standards Feature Implementation

    - by Mike Hofer
    Okay, I have reached a sort of an impasse. In my open source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.) Unfortunately, I am discovering, much to my chagrin, that there doesn't seem to be any one widely-used or even "generally accepted" standard for PL/SQL. That kind of puts a crimp in my implementation plans. My search has been fairly exhaustive. I've found lots of conflicting documents, threads and articles, and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.) So I'm faced with a couple of options:

    - Add a feature that lets the user customize the standard and then reformats the code according to that standard.
    - Add a feature that lets the user customize the standard and simply generates a violations list like StyleCop does, leaving the SQL untouched.

    In my mind, the first option saves the end-users a lot of work, but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings and doing no work whatsoever. (It'd just be generally annoying.) In either scenario, I still have no standard to go by. What I'd need to know from you guys is kind of poll-ish, but kind of not. If you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix? Again, I'm just at a loss due to the lack of a cohesive standard. And given that there isn't anything out there that's officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help to establish the popularity of a given "refactoring."

    P.S. The engine parses SQL into an expression tree so it can robustly analyze the SQL and reformat it. There should be quite a bit that we can do to correct the format of the SQL. But I am thinking that for the first release of the thing, layout is the primary concern. Though it is worth noting that the thing already has refactorings for converting keywords to upper case, and identifiers to lower case.

    Read the article

  • Richfaces modal panel and a4j:keepAlive

    - by mykola
    Hello! I've got unexpected problems with the RichFaces (3.3.2) modal panel. When I try to open it, the browser opens two panels instead of one: one in the center, another in the upper left corner. Besides, no fading happens. I also have three modes (view, edit, new), and when I open the panel it should show either "Create new..." or "Edit..." in the header. It does show, but not in the header, as the header isn't rendered at all even though it should be, because I set the proper mode in the action before opening this modal panel. The same pattern works fine on all other pages I've made, and there are tens of such pages in my application. I can't understand what's wrong here. The only way to fix it is to remove <a4j:keepAlive/> from the page, which is very strange, IMHO.

    I'm not sure if code will be useful here as it works fine everywhere else in my application, so if you put it on your page it will probably work without problems. My only question is: are there any hidden or rare problems in the interaction of these two elements (<rich:modalPanel> and <a4j:keepAlive>)? Or shall I spend another two or three days searching for some wrong comma, parenthesis or whatever in my code? :)

    For the most curious, the panel itself:

        <!-- there's no outer form -->
        <rich:modalPanel id="panel" autosized="true" minWidth="300" minHeight="200">
            <f:facet name="header">
                <h:panelGroup id="panelHeader">
                    <h:outputText value="#{msg.new_smth}" rendered="#{MbSmth.newMode}"/>
                    <h:outputText value="#{msg.edit_smth}" rendered="#{MbSmth.editMode}"/>
                </h:panelGroup>
            </f:facet>
            <h:panelGroup id="panelDiv">
                <h:form>
                    <!-- fields and buttons -->
                </h:form>
            </h:panelGroup>
        </rich:modalPanel>

    One of the buttons that open the panel:

        <a4j:commandButton id="addBtn" reRender="panelHeader, panelDiv" value="#{form.add}"
                           oncomplete="#{rich:component('panel')}.show()"
                           action="#{MbSmth.add}" image="create.gif"/>

    The action invoked on button click:

        public void add() {
            curMode = NEW_MODE; // initial mode is VIEW_MODE
            newSmth = new Smth();
        }

    The mode check:

        public boolean isNewMode() {
            return curMode == NEW_MODE;
        }

        public boolean isEditMode() {
            return curMode == EDIT_MODE;
        }
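    One thing worth checking, offered as a guess rather than a confirmed fix: the two-panels-plus-no-fade symptom often means the panel's markup ended up in the DOM twice, for example after an Ajax re-render left a second copy behind. In RichFaces 3.3 the panel can be pinned to one DOM location with domElementAttachment (a real modalPanel attribute), and keepAlive can be scoped to the bean by name (beanName is a real a4j:keepAlive attribute); MbSmth is assumed to be this page's backing bean:

        <!-- Sketch, not a confirmed fix: pin the panel to <body> so a
             re-render cannot duplicate it, and keep the bean alive
             explicitly by name. -->
        <a4j:keepAlive beanName="MbSmth"/>
        <rich:modalPanel id="panel" autosized="true" minWidth="300" minHeight="200"
                         domElementAttachment="body">
            <!-- header facet and panelDiv exactly as above -->
        </rich:modalPanel>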

    Read the article

  • regex to break a string into "key" / "value" pairs when # of pairs is variable?

    - by user141146
    Hi, I'm using Ruby 1.9 and I'm wondering if there's a simple regex way to do this. I have many strings that look like some variation of this:

        str = "Allocation: Random, Control: Active Control, Endpoint Classification: Safety Study, Intervention Model: Parallel Assignment, Masking: Double Blind (Subject, Caregiver, Investigator, Outcomes Assessor), Primary Purpose: Treatment"

    The idea is that I'd like to break this string into its functional components:

        Allocation: Random
        Control: Active Control
        Endpoint Classification: Safety Study
        Intervention Model: Parallel Assignment
        Masking: Double Blind (Subject, Caregiver, Investigator, Outcomes Assessor)
        Primary Purpose: Treatment

    The "syntax" of the string is that there is a "key" which consists of one or more words or other characters (e.g. Intervention Model) followed by a colon (:). Each key has a corresponding "value" (e.g. Parallel Assignment) that immediately follows the colon. The "value" consists of words, commas and so on, but the end of the "value" is signaled by a comma. The number of key/value pairs is variable. I'm also assuming that colons (:) aren't allowed to be part of the "value" and that commas (,) aren't allowed to be part of the "key".

    One would think that there is a "regexy" way to break this into its component pieces, but my attempt at an appropriate matching regex only picks up the first key/value pair, and I'm not sure how to capture the others. Any thoughts on how to capture the other matches?

        regex = /(([^,]+?): ([^:]+?,))+?/
        => /(([^,]+?): ([^:]+?,))+?/
        irb(main):139:0> str = "Allocation: Random, Control: Active Control, Endpoint Classification: Safety Study, Intervention Model: Parallel Assignment, Masking: Double Blind (Subject, Caregiver, Investigator, Outcomes Assessor), Primary Purpose: Treatment"
        => "Allocation: Random, Control: Active Control, Endpoint Classification: Safety Study, Intervention Model: Parallel Assignment, Masking: Double Blind (Subject, Caregiver, Investigator, Outcomes Assessor), Primary Purpose: Treatment"
        irb(main):140:0> str.match regex
        => #<MatchData "Allocation: Random," 1:"Allocation: Random," 2:"Allocation" 3:" Random,">
        irb(main):141:0> $1
        => "Allocation: Random,"
        irb(main):142:0> $2
        => "Allocation"
        irb(main):143:0> $3
        => " Random,"
        irb(main):144:0> $4
        => nil
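    One way to capture every pair, sketched in Ruby (the asker's 1.9 environment): String#scan returns all matches, and a lookahead can treat a comma as a pair separator only when the next key (comma-free text ending in a colon) follows it. That keeps the commas inside the Masking parenthetical attached to their value:

        # A minimal sketch: the lookahead ends a value only at a comma that
        # is followed by another "key:" token (or at end of string), so
        # commas inside values survive.
        str = "Allocation: Random, Control: Active Control, Endpoint Classification: Safety Study, Intervention Model: Parallel Assignment, Masking: Double Blind (Subject, Caregiver, Investigator, Outcomes Assessor), Primary Purpose: Treatment"

        pairs = str.scan(/([^:,]+?):\s*(.*?)(?=,\s*[^:,]+:|\z)/)
                   .map { |key, value| [key.strip, value.strip] }

        pairs.each { |key, value| puts "#{key}: #{value}" }
        # Allocation: Random
        # Control: Active Control
        # ...
        # Masking: Double Blind (Subject, Caregiver, Investigator, Outcomes Assessor)
        # Primary Purpose: Treatment

    The original regex stopped after one pair because match only returns the first match; scan is the "find them all" counterpart.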

    Read the article

  • jQuery: modify hidden form field value before submit

    - by Jason Miesionczek
    I have the following code in a partial view (using Spark):

        <span id="selectCount">0</span> video(s) selected.
        <for each="var video in Model">
            <div style="padding: 3px; margin:2px" class="video_choice" id="${video.YouTubeID}">
                <span id="video_name">${video.Name}</span><br/>
                <for each="var thumb in video.Thumbnails">
                    <img src="${thumb}" />
                </for>
            </div>
        </for>
        # using(Html.BeginForm("YouTubeVideos","Profile", FormMethod.Post, new { id = "youTubeForm" }))
        # {
            <input type="hidden" id="video_names" name="video_names" />
            <input type="submit" value="add selected"/>
        # }
        <ScriptBlock>
            $(".video_choice").click(function() {
                $(this).toggleClass('selected');
                var count = $(".selected").length;
                $("#selectCount").html(count);
            });

            var options = {
                target: '#videos',
                beforeSubmit: function(arr, form, opts) {
                    var names = [];
                    $(".selected").each(function() {
                        names[names.length] = $(this).attr('id');
                    });
                    var namestring = names.join(",");
                    $("#video_names").attr('value', namestring);
                    //alert(namestring);
                    //arr["video_names"] = namestring;
                    //alert($.param(arr));
                    //alert($("#video_names").attr('value'));
                    return true;
                }
            };
            $("#youTubeForm").ajaxForm(options);
        </ScriptBlock>

    Essentially I display a series of divs that contain information pulled from the YouTube API, and I use jQuery to let the user select which videos they would like to add to their profile. When I submit the form I would like to populate the hidden field with a comma-separated list of video IDs. Everything works except that when I try to set the value of the field, the field comes back empty in the controller on post. I am using the jQuery Ajax Form plugin. What am I doing wrong that keeps the value I set in the field from being sent to the server?
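    Assuming this is the jQuery Form plugin's ajaxForm, the likely catch is that by the time beforeSubmit runs, the form has already been serialized into arr, so writing to the hidden field changes the DOM but not this request. The usual workaround is to push the value into arr itself; a sketch:

        // Sketch, assuming jquery.form's ajaxForm: arr is an array of
        // {name, value} objects already built from the form, so append
        // the computed value to it directly.
        beforeSubmit: function(arr, form, opts) {
            var names = [];
            $(".selected").each(function() {
                names.push(this.id);
            });
            arr.push({ name: 'video_names', value: names.join(',') });
            return true;
        }

    The commented-out arr["video_names"] = namestring line was close; it fails because arr is an array of name/value objects, not a map keyed by field name.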

    Read the article

  • .Net Entity Framework SaveChanges is adding without add method

    - by tmfkmoney
    I'm new to the Entity Framework and I'm really confused about how SaveChanges works. There's probably a lot of code in my example which could be improved, but here's the problem I'm having. The user enters a bunch of picks. I make sure the user hasn't already entered those picks. Then I add the picks to the database.

        var db = new myModel();
        var predictionArray = ticker.Substring(1).Split(','); // Get rid of the initial comma.
        var user = Membership.GetUser();
        var userId = Convert.ToInt32(user.ProviderUserKey);

        // Get the member with all his predictions for today.
        var memberQuery = (from member in db.Members
                           where member.user_id == userId
                           select new
                           {
                               member,
                               predictions = from p in member.Predictions
                                             where p.start_date == null
                                             select p
                           }).First();

        // Load all the company ids.
        foreach (var prediction in memberQuery.predictions)
        {
            prediction.CompanyReference.Load();
        }

        var picks = from prediction in predictionArray
                    let data = prediction.Split(':')
                    let companyTicker = data[0]
                    where !(from i in memberQuery.predictions
                            select i.Company.ticker).Contains(companyTicker)
                    select new Prediction
                    {
                        Member = memberQuery.member,
                        Company = db.Companies.Where(c => c.ticker == companyTicker).First(),
                        is_up = data[1] == "up", // This turns up and down into true and false.
                    };

        // Save the records to the database.
        // HERE'S THE PART I DON'T UNDERSTAND.
        // This saves the records, even though I don't have db.AddToPredictions(pick).
        foreach (var pick in picks)
        {
            db.SaveChanges();
        }

        // This does not save records when the db.SaveChanges is outside of a loop over picks.
        db.SaveChanges();
        foreach (var pick in picks) { }

        // This saves records, but it will insert all the picks exactly once no matter
        // how many picks you have. The fact that you're skipping a pick makes no
        // difference in what gets inserted.
        var counter = 1;
        foreach (var pick in picks)
        {
            if (counter == 2)
            {
                db.SaveChanges();
            }
            counter++;
        }

    There's obviously something going on with the context I don't understand. I'm guessing I've somehow loaded my new picks as pending changes, but even if that's true I don't understand why I have to loop over them to save changes. Can someone explain this to me?
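    A plausible reading, offered as an explanation rather than a certainty: picks is a lazily evaluated LINQ query, so nothing happens until something enumerates it. Each iteration constructs a new Prediction whose Member and Company navigation properties point at entities already attached to the context, and EF's relationship fixup attaches the new object as well; that is why rows appear without any AddToPredictions call, and why a SaveChanges with no preceding enumeration saves nothing. The conventional shape avoids saving inside the loop entirely; a sketch:

        // Sketch: force the query to run exactly once, then save once.
        // ToList() is the enumeration step that materializes each Prediction
        // and lets relationship fixup attach it to the context.
        var materializedPicks = picks.ToList();
        db.SaveChanges(); // one call persists every attached pick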

    Read the article

  • How to implement a tagging plugin for jQuery

    - by anxiety
    Goal: to implement a jQuery plugin for my Rails app (or write one myself, if necessary) that creates a "box" around text after a delimiter is typed.

    Example: with tagging on SO, the user begins typing a tag, then selects one from the drop-down list provided. The input field recognizes that a tag has been selected, puts a space, and is then ready for the next tag. Similarly, I am attempting to use this plugin to put a box around the previously entered tag before moving on to accept the next tag/input.

    The instructions in the README.txt seem simple enough, yet I keep getting a $(".tagbox").tagbox is not a function error when debugging my app with Firebug. Here is what I have in my application.js:

        $(document).ready( function(){
            $('.tagbox').tagbox({
                separator: /\[,]/, // specifying comma separation for tags
            });
        });

    And here is my _form.html.erb:

        <% form_for @tag do |f| %>
            <%= f.error_messages %>
            <p>
                <%= f.label :name %><br />
                <%= text_field :tag, :name, { :method => :get, :class => "tagbox" } %>
            </p>
            <p><%= f.submit "Submit" %></p>
        <% end %>

    I have omitted some other code (namely the implementation of an autocomplete plugin) from my _form.html.erb and application.js for the sake of readability; including or excluding it does not affect the behavior of this plugin. I have included all of the necessary files for the tagbox plugin (as well as application.js after all other included JS files) in the javascript_include_tag of my application.html.erb.

    I'm confused as to why I'd be getting this "not a function" error when jquery.tagbox.js clearly defines the function and is included in the head of the HTML page in question. I've been struggling with this plugin for longer than I'd like to admit, so any help would really be appreciated. And, of course, I'm open to any other plugins or from-scratch suggestions you may have in mind; this tagbox plugin does not seem to have much documentation or any currently working examples. Also to note: I'm trying to avoid using jrails. Thanks for your time.
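    A "$(...).tagbox is not a function" error almost always means $.fn never gained the plugin on this particular page: the plugin file is not actually being served, it is loaded before jQuery, or a second jQuery include later in the head wiped out the plugins registered earlier. A quick diagnostic sketch to drop into application.js:

        // Sketch of a diagnostic: confirm the plugin registered itself
        // before calling it, instead of failing with "not a function".
        $(document).ready(function() {
            if ($.fn.tagbox) {
                $('.tagbox').tagbox({ separator: /,/ }); // plain comma separator
            } else if (window.console) {
                console.log('tagbox missing: check that jquery.tagbox.js is served ' +
                            'once, after jQuery and before application.js');
            }
        });

    Two smaller things worth a look while you're in there, offered as guesses: the trailing comma after the separator option is a syntax error in older IE, and /\[,]/ matches the literal text "[,]" rather than a comma, so a plain /,/ (as in the sketch) is more likely what was intended.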

    Read the article

  • Google Chart Number formatting

    - by MizAkita
    I need to format my pie and column charts to show the $ and comma in currency format ($###,###) when you hover over the charts. Right now they display the number and percentage, but the number appears as #####.##. Here is my code; any help would be appreciated.

        // Load the Visualization API and the piechart package.
        google.load('visualization', '1.0', {'packages':['corechart']});

        // Set a callback to run when the Google Visualization API is loaded.
        google.setOnLoadCallback(drawChart);

        var formatter = new google.visualization.NumberFormat({
            prefix: '$'
        });
        formatter.format(data, 1);

        var options = {
            pieSliceText: 'value'
        };

        // Callback that creates and populates a data table,
        // instantiates the pie chart, passes in the data and
        // draws it.
        function drawChart() {
            // REVENUE CHART - Create the data table.
            var data4 = new google.visualization.DataTable();
            data4.addColumn('string', 'Status');
            data4.addColumn('number', 'In Thousands');
            data4.addRows([
                ['Net tution & Fees', 213.818],
                ['Auxiliaries', 30.577],
                ['Government grants/contracts', 39.436],
                ['Private grants/gifts', 39.436],
                ['Investments', 10.083],
                ['Clinics', 14.353],
                ['Other', 5.337]
            ]);

            // EXPENSES CHART - Create the data table.
            var data5 = new google.visualization.DataTable();
            data5.addColumn('string', 'Status');
            data5.addColumn('number', 'Amount');
            data5.addRows([
                ['Instruction', 133.868],
                ['Sponsored Progams', 34.940],
                ['Auxiliaries', 30.064],
                ['Academic Support', 25.529],
                ['Depreciation & amortization', 18.548],
                ['Student Services', 22.626],
                ['Plant operations & maintenance', 18.105],
                ['Fundraising', 13.258],
                ['Geneal Administration', 11.628],
                ['Interest', 6.846],
                ['Student Aid', 1.886],
            ]);

            // ENDOWMENT CHART - Create the data table.
            var data6 = new google.visualization.DataTable();
            data6.addColumn('string', 'Status');
            data6.addColumn('number', 'In Millions');
            data6.addRows([
                ['2010', 178.7],
                ['2011', 211.693],
                ['2012', 199.3]
            ]);

            // Set REVENUE chart options
            var options4 = { is3D: true, fontName: 'Arial',
                colors:['#AFD8F8', '#F6BD0F', '#8BBA00', '#FF8E46', '#008E8E', '#CCCCCC', '#D64646', '#8E468E'],
                'title':'', 'width':550, 'height':250};

            // Set EXPENSES chart options
            var options5 = { is3D: true, fontName: 'Arial',
                colors:['#AFD8F8', '#F6BD0F', '#8BBA00', '#FF8E46', '#008E8E', '#CCCCCC', '#D64646', '#8E468E'],
                'title':'', 'width':550, 'height':250};

            // Set ENDOWMENT chart options
            var options6 = { is3D: true, fontName: 'Arial',
                colors:['#AFD8F8', '#F6BD0F', '#8BBA00', '#FF8E46', '#008E8E', '#CCCCCC', '#D64646', '#8E468E'],
                'title':'', 'width':450, 'height':250};

            // Instantiate and draw our chart, passing in some options.
            var chart4 = new google.visualization.PieChart(document.getElementById('chart_div4'));
            chart4.draw(data4, options4);
            var chart5 = new google.visualization.PieChart(document.getElementById('chart_div5'));
            chart5.draw(data5, options5);
            var chart6 = new google.visualization.ColumnChart(document.getElementById('chart_div6'));
            chart6.draw(data6, options6);
        }
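    Two things stand out, noted as likely fixes rather than certainties: the formatter runs at load time against a variable named data that never exists, and NumberFormat has to be applied to each DataTable after it is built and before draw() is called. Moving it inside the callback, a sketch:

        // Sketch: create the formatter inside drawChart() and apply it to
        // column 1 of each table before drawing; hover tooltips then show
        // the formatted value. (The original formatted an undefined `data`
        // variable before any table existed.)
        function drawChart() {
            // ... build data4, data5, data6 exactly as before ...

            var formatter = new google.visualization.NumberFormat({
                pattern: '$#,###'   // e.g. 213818 -> $213,818
            });
            formatter.format(data4, 1); // column 1 holds the numbers
            formatter.format(data5, 1);
            formatter.format(data6, 1);

            // ... create the charts and call draw() as before ...
        }

    One caveat from the data itself: the rows store values like 213.818 (the column is labeled "In Thousands"), so to display $213,818 the numbers would need to be multiplied by 1,000 when the rows are added; otherwise the pattern above will round 213.818 to $214.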

    Read the article

  • Modify a php limit text function adding some kind of offset to it

    - by webmasters
    Maybe you guys can help. I have a variable called $bio with bio data:

        $bio = "Hello, I am John, I'm 25, I like fast cars and boats. I work as a blogger and I'm way cooler then the author of the question";

    I search $bio using a set of functions that look for a certain word, say "author", and wrap a span class around that word, giving me:

        $bio = "Hello, I am John, I'm 25, I like fast cars and boats. I work as a blogger and I'm way cooler then the <span class=\"highlight\">author</span> of the question";

    I then use a function to limit the text to 85 characters:

        $bio = limit_text($bio, 85);

    The problem arises when there are more than 80 characters before the word "author" in $bio: once limit_text() is applied, I won't see the highlighted word at all. What I need is for limit_text() to work as normal, but to append all the highlighted words at the end. Something like:

        "This is the limited text to 85 chars, but there are no words with the span class highlight so I am putting to be continued ... author, author2 (and all the other words that have a span class highlight around them, separated by commas)"

    Hope you understand what I mean; if not, please comment and I'll try to explain better. Here is my limit_text() function:

        function limit_text($text, $length){ // Limit Text
            if(strlen($text) > $length) {
                $stringCut = substr($text, 0, $length);
                $text = substr($stringCut, 0, strrpos($stringCut, ' '));
            }
            return $text;
        }

    UPDATE:

        $xturnons = str_replace(",", ", ", $xturnons);
        $xbio = str_replace(",", ", ", $xbio);

        $xbio = customHighlights($xbio, $toHighlight);
        $xturnons = customHighlights($xturnons, $toHighlight);

        $xbio = limit_text($xbio, 85);
        $xturnons = limit_text($xturnons, 85);

    The customHighlights function which adds the span class highlighted:

        function addRegEx($word){ // Highlight Words
            return "/" . $word . '[^ ,\,,.,?,\.]*/i';
        }

        function highlight($word){
            return "<span class='highlighted'>" . $word[0] . "</span>";
        }

        function customHighlights($searchString, $toHighlight){
            $searchFor = array_map('addRegEx', $toHighlight);
            $result = preg_replace_callback($searchFor, 'highlight', $searchString);
            return $result;
        }
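    One way to get the "append what was cut off" behavior: cut as before, then scan only the discarded tail for highlighted spans and append their inner text after an ellipsis. A sketch; it assumes the cut point does not land in the middle of a span tag, and it matches the single-quoted highlighted class that customHighlights above produces:

        // Sketch: limit as before, then append any highlighted words that
        // fell past the cut, comma-separated after an ellipsis.
        function limit_text($text, $length){
            if(strlen($text) <= $length) {
                return $text;
            }
            $stringCut = substr($text, 0, $length);
            $kept = substr($stringCut, 0, strrpos($stringCut, ' '));

            // Collect highlights that only appear in the discarded tail.
            $tail = substr($text, strlen($kept));
            preg_match_all("#<span class='highlighted'>(.*?)</span>#", $tail, $m);
            if(!empty($m[1])) {
                $kept .= ' ... ' . implode(', ', $m[1]);
            }
            return $kept;
        }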

    Read the article

  • .live event doesn't work till second click

    - by ChampionChris
    I have two lists on a page that are linked. When I drag an li element from list 1 to list 2, the live events on list 1 don't work on the first click, only the second click. Below is the code that adds the li (obj) to list 2:

        function AddToDropBox(obj) {
            $(obj).children(".handle").animate({ width: "20px" }).children("strong").fadeOut();
            $(obj).children("span:not(.track,.play,.handle,:has(.btn-edit))").fadeOut('fast');
            $(obj).children(".play").css("margin-right", "8px");
            $(obj).css({ "opacity": "0.0", "width": "284px" }).animate({ opacity: "1.0" });

            if ($(".sidebar-drop-box ul").children(".admin-song").length > 0) {
                $(".dropTitle").fadeOut("fast");
                $(".sidebar-drop-box ul.admin-song-list").css("min-height", "0");
            }

            if (typeof SetLinks == 'function') {
                SetLinks();
            }

            // CBG changes: adds media ID to hidden field.
            // Checks if there is a value in the field, then adds a comma.
            if (document.getElementById("ctl00_cphBody_hfRemoveMedia").value == "" || document.getElementById("ctl00_cphBody_hfRemoveMedia").value == null) {
                document.getElementById("ctl00_cphBody_hfRemoveMedia").value = (obj).attr("mediaid");
            } else {
                var localMediaIDs = document.getElementById("ctl00_cphBody_hfRemoveMedia").value;
                document.getElementById("ctl00_cphBody_hfRemoveMedia").value = localMediaIDs + ", " + (obj).attr("mediaid");
            }
            // alert("hfid: " + document.getElementById("ctl00_cphBody_hfRemoveMedia").value);
            // END CBG modifications
        }

    This is one of the live() events that don't fire until the second click after the drag. It lives inside a document.ready function():

        // Live for deleting.
        $(".btn-del").live("click", function(e) {
            DeleteItem(this);
            $(this).removeClass("btn-del").addClass("btn-add").parents("li").removeClass("alt").addClass("removed");
            var oldTxt = $(this).parents("li").find(".status").text();
            $(this).parents("li").find(".status").text("Removed").attr("oldstat", oldTxt);
            $("#timeHolder input[type=hidden]").val(($("#timeHolder input[type=hidden]").val() * 1) - ($(this).parents("li").find(".time").attr("length") * 1));
            CalculateAggregates();
            isDirty = false;
        });

    EDIT: @dreaton, I'm new to jQuery and JavaScript, so thanks for the last tip. I'm not sure what you mean about caching the queries. The delegate feature is giving me this: Microsoft JScript runtime error: Object doesn't support this property or method. This is the way I have the code:

        $('#ulPlaylist').delegate('.btn-del', 'click', function (e) {
            DeleteItem(this);
            $(this).removeClass("btn-del").addClass("btn-add").parents("li").removeClass("alt").addClass("removed");
            var oldTxt = $(this).parents("li").find(".status").text();
            $(this).parents("li").find(".status").text("Removed").attr("oldstat", oldTxt);
            $("#timeHolder input[type=hidden]").val(($("#timeHolder input[type=hidden]").val() * 1) - ($(this).parents("li").find(".time").attr("length") * 1));
            CalculateAggregates();
            isDirty = false;
        });
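    On the edit: $.fn.delegate only exists from jQuery 1.4.2 onward, so "Object doesn't support this property or method" in IE usually just means the page is loading an older jQuery. A sketch of the check, plus the 1.4.2+ form:

        // Sketch: confirm the jQuery version before reaching for delegate().
        // delegate() was added in jQuery 1.4.2; live() exists from 1.3.
        if (window.console) {
            console.log(jQuery.fn.jquery); // e.g. "1.3.2" has no delegate()
        }
        $('#ulPlaylist').delegate('.btn-del', 'click', function (e) {
            // same handler body as the live() version above
        });

    As for the original first-click problem, one common culprit with drag-and-drop (a guess, not a diagnosis) is the drop interaction swallowing the first click after the drag; delegating from a stable ancestor such as #ulPlaylist at least rules out lost bindings on the moved elements.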

    Read the article

  • MODX parse error function implode (is it me or modx?)

    - by Ian
    Hi, I cannot for the life of me figure this out; maybe someone can help. Using MODX, a form takes user criteria to create a filter and return a list of documents. The form is one text field and a few checkboxes. If both text field and checkbox data are posted, the function works fine; if just the checkbox data is posted, the function works fine; but if just the text field data is posted, MODX gives me the following error:

        Error: implode() [function.implode]: Invalid arguments passed.

    I've tested this outside of MODX with flat files and it all works fine, leading me to assume a bug exists within MODX. But I'm not convinced. Here's my code:

        <?php
        $order = array('price ASC'); // default sort order

        if(!empty($_POST['tour_finder_duration'])){ // duration submitted
            $days = htmlentities($_POST['tour_finder_duration']); // clean up post
            array_unshift($order, "duration DESC"); // add duration sort before default
            $filter[] = 'duration,'.$days.',4'; // add duration to filter[] (field,criterion,mode)
            $criteria[] = 'Number of days: <strong>'.$days.'</strong>'; // displayed on results page
        }

        if(!empty($_POST['tour_finder_dests'])){ // destination/s submitted
            $dests = $_POST['tour_finder_dests'];
            foreach($dests as $value){ // iterate through dests array
                $filter[] = 'searchDests,'.htmlentities($value).',7'; // add dests to filter[]
                $params['docid'] = $value;
                $params['field'] = 'pagetitle';
                $pagetitle = $modx->runSnippet('GetField', $params);
                $dests_array[] = '<a href="[~'.$value.'~]" title="Read more about '.$pagetitle.'" class="tourdestlink">'.$pagetitle.'</a>';
            }
            $dests_array = implode(', ', $dests_array);
            $criteria[] = 'Destinations: '.$dests_array; // displayed on results page
        }

        if(is_array($filter)){
            $filter = implode('|', $filter); // pipe-separated string
        }
        if(is_array($order)){
            $order = implode(',', $order); // comma-separated string
        }
        if(is_array($criteria)){
            $criteria = implode('<br />', $criteria);
        }

        echo '<br />Order: '.$order.'<br />Filter: '.$filter.'<br />Criteria: '.$criteria;

        // next: extract docs using $filter and $order, display user's criteria using $criteria...
        ?>

    The echo statement is displayed above the MODX error message and the $filter array is correctly imploded. Any help will save my computer from flying out the window. Thanks.
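    A defensive rewrite worth trying, assuming the warning comes from one of these variables never being initialized as an array on some request paths (PHP's implode() raises exactly this warning when handed null, and $filter, $criteria and $dests_array are all created only inside conditionals): declare the arrays up front so every implode() call is safe regardless of which fields were posted. A sketch:

        // Sketch: initialize every collector as an array before the
        // conditionals, then implode unconditionally.
        $order    = array('price ASC');
        $filter   = array();
        $criteria = array();

        // ... build $filter / $criteria exactly as before ...

        $filter   = implode('|', $filter);       // pipe-separated string
        $order    = implode(',', $order);        // comma-separated string
        $criteria = implode('<br />', $criteria);

    This also makes the is_array() guards unnecessary, which helps if the error is actually being raised by a later implode() elsewhere in the snippet ("next: extract docs...") that this page doesn't show.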

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36  | Next Page >