Search Results

Search found 1649 results on 66 pages for 'unicode normalization'.


  • Need assistance matching a general theme style as well as eCommerce capability

    - by humble_coder
    I'm in the process of acquiring a new design client. They are getting into the business of "auto parts wholesaling" and they want a storefront. My preference is/was to create something from scratch. However, there is an established trend in their particular market (similar parts, layout, etc.). They insist on following the existing visual trend, as per the following: http://www.xtremediesel.com/ http://www.thoroughbreddiesel.com/ http://www.alligatorperformance.com/ My plan of attack at this point is to find a comparable WP theme and a flexible (but useful) backend/product management system. Their current demo site (which their previous developer made a stab at) is using Pinnacle Cart. It is nowhere near what they need, nor is it intuitive to work with. I was actually considering Magento for its greater abilities, but I'm still considering options. That said, my two primary dilemmas are as follows: 1) I need a theme that mimics the general style of those listed. They explicitly said they didn't want anything too clean (e.g. ThemeForest, Woothemes) as it "wasn't rugged or busy looking enough" for their field. 2) I need a WP/Magento/WP e-Commerce (or any one of a host of other) plugin that will allow for bulk import/update of nearly 200,000 products, descriptions and images. I'm not opposed to manually interfacing with the DB for import, but in the end, I need a store/system that doesn't needlessly add 50 tables to accommodate some "wet behind the ears" concept of table normalization and is easy to add to. Anyway, if anyone has any quality suggestions regarding either of these issues, it would be most appreciated. Best.

    Read the article

  • Today's Links (6/17/2011)

    - by Bob Rhubart
    Call for Nominations: Oracle Eco-Enterprise Innovation Awards Is your organization using Oracle products to reduce your environmental footprint while reducing costs? If so, submit your nomination for Oracle's Eco-Enterprise Innovation award. These awards will be presented to select customers and their partners who are using any of Oracle's products to not only take an environmental lead, but also to reduce their costs and improve their business efficiencies by using green business practices. Beyond The Data Grid: Coherence, Normalization, Joins, and Linear Scalability | Ben Stopford Ben Stopford presents ODC, a highly distributed in-memory normalized NoSQL datastore designed for scalability, based on normalized data, Snowflake Schema, and Connected Replication pattern. Upgrading ALSB services to OSB | John Chin-a-Woeng John Chin-a-Woeng walks you through the upgrade from Aqualogic Service Bus (ALSB 3.0) to Oracle Service Bus (OSB 10.3). SOA & Middleware: Pinning tasks to a user in BPM 11g | Niall Commiskey Commiskey illustrates a scenario. JDeveloper 11gR2: New option Test WebService in WSDL editor | Lucas Jellema The "Test WebService" button in the WSDL Editor in JDeveloper 11gr2 is "just a little feature addition," says Oracle ACE Director Lucas Jellema. "But it can be quite useful all the same." Enterprise Business Intelligence 11g Seminar with Mark Rittman Oracle ACE Director Mark Rittman conducts a two-day course for Oracle University, in Dublin, IE, July 4-5, 2011. Data Integration Webcast Series Join Oracle experts for a series covering our data integration solutions. You’ll get invaluable information to help boost your data infrastructure so that you can accelerate your business.

    Read the article

  • Review: Data Modeling 101

    I just recently read "Data Modeling 101" by Scott W. Ambler, in which he gives an overview of fundamental data modeling skills. I think this article is excellent for anyone who is just starting to learn, or refreshing their skills in, the modeling of data. Scott defines data modeling as the act of exploring data-oriented structures. He goes on to explain how data models are actually used by defining three types of models: Conceptual Data Models, Logical Data Models (LDMs), and Physical Data Models (PDMs). He further expands on modeling by exploring common data modeling notations, because there are no industry standards for the practice of data modeling. Scott then defines how to actually model data by expanding on entities, attributes, identities, and relationships, which are the basic building blocks of data models. In addition he discusses the value of normalization for reducing redundancy and of denormalization for performance. Finally, he discusses ways in which developers and DBAs can become better data modelers through practice and by seeking guidance from more experienced data modelers.
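
    A quick illustration of the normalization/denormalization trade-off the review mentions (a generic SQL sketch; the table and column names are invented for this example, not taken from the article):

    -- Normalized: customer details live in one place and orders reference them by key.
    CREATE TABLE Customer (
      CustomerId INT PRIMARY KEY,
      Name       VARCHAR(100) NOT NULL,
      City       VARCHAR(60)
    );

    CREATE TABLE CustomerOrder (
      OrderId    INT PRIMARY KEY,
      CustomerId INT NOT NULL REFERENCES Customer(CustomerId),
      OrderDate  DATE NOT NULL
    );

    -- Denormalized: customer details are copied onto every row, trading redundancy
    -- (and the risk of update anomalies) for fewer joins when reporting.
    CREATE TABLE OrderReport (
      OrderId      INT PRIMARY KEY,
      OrderDate    DATE NOT NULL,
      CustomerName VARCHAR(100) NOT NULL,
      CustomerCity VARCHAR(60)
    );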

    Read the article

  • How to analyze data

    - by Subhash Dike
    We are working on an application that allows the user to search/read some content in a particular domain. We want to add a capability to the app that can suggest content to the user based on their usage pattern (analyzing the data by frequency and relevance). Currently, every time the user searches for or reads something, we store that information in the backend database. We would like to use this data to present additional content to the user. Could someone explain what kind of tools would be required for such a job, and give an example? And what is this concept called: data analysis? data mining? business intelligence? or something else? Update: Sorry for being too broad; here is an example SQL database (just to give an idea, the actual db is a little different, with normalization and such). Table: UserArticles Fields: UserName | ArticleId | ArticleTitle | DateVisited | ArticleCategory Table: CategoryArticles Fields: Category | Article Title | Author etc. One category may have one or more articles. One user may have read the same article multiple times (in this case we place an additional entry in the UserArticles table). Task: Use the information available in the UserArticles table to rank categories in the order in which they would be presented to the user automatically in another part of the application. The factors to be considered are frequency and recency. This might be possible through simple queries or may require specialized tools; either way, the task is as described above. I am not too sure which route to take, hence the question. Thoughts??
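
    A minimal sketch of the kind of query that could drive such a ranking, assuming MySQL and the UserArticles columns described above; the frequency/recency weighting (0.1 per day since the last visit) and the user name are purely illustrative:

    -- Rank one user's categories by visit count, discounted by how long ago the last visit was.
    SELECT ArticleCategory,
           COUNT(*)                                            AS visit_count,
           MAX(DateVisited)                                    AS last_visited,
           COUNT(*) - 0.1 * DATEDIFF(NOW(), MAX(DateVisited))  AS score
    FROM   UserArticles
    WHERE  UserName = 'some_user'      -- hypothetical user
    GROUP BY ArticleCategory
    ORDER BY score DESC;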

    Read the article

  • EE vs Computer Science: Effect on Developers' Approaches, Styles?

    - by DarenW
    Are there any systematic differences between software developers (software engineers, architects, whatever the job title) with an electronics or other engineering background, compared to those who entered the profession through computer science? By electronics background, I mean an EE degree, a self-taught electronics tinkerer, other types of engineers, and experimental physicists. I'm wondering if coming into the software-making professions from a strong knowledge of flip-flops, tristate buffers, clock edge rise times and so forth usually leads to a distinct approach to problems, mindsets, or superior skills at certain specialties and a lack of skills at others, when compared to the computer science types who are full of concepts like abstract data types, object orientation, and database normalization, and who speak of "closures" in programming languages - things that make little sense to the soldering iron crowd until they learn enough programming. The real world, I'm sure, offers a wild range of individual exceptions, but for the most part, can you say there are overall differences? Would these have hiring implications, e.g. (to make up something) "never hire an electron wrangler to do database design"? Could knowing about any differences help job seekers find something appropriate more effectively? Or provide enlightenment or some practical advice for those who find themselves misfits in a particular job role? (Btw, I've never taken any computer science classes; my impression of exactly what they cover is fuzzy. I'm an electronics/physics/art type, myself.)

    Read the article

  • What are some good tips for a developer trying to design a scalable MySQL database?

    - by CFL_Jeff
    As the question states, I am a developer, not a DBA. I have experience with designing good ER schemas and am fairly knowledgeable about normalization and good schema design. I have also worked with data warehouses that use dimensional modeling with fact tables and dim tables. However, all of the database-driven applications I've developed at previous jobs have been internal applications on the company's intranet, never receiving "real-world traffic". Furthermore, at previous jobs, I have always had a DBA or someone who knew much more than me about these things. At this new job I just started, I've been asked to develop a public-facing application with a MySQL backend and the data stored by this application is expected to grow very rapidly. Oh, and we don't have a DBA. Well, I guess I am the DBA. ;) As far as designing a database to be scalable, I don't even know where to start. Does anyone have any good tips or know of any good educational materials for a developer who has been sort of shoved into a DBA/database designer role and has been tasked with designing a scalable database to support an application like this? Have any other developers been through this sort of thing? What did you do to quickly become good at this role? I've found some good slides on the subject here but it's hard to glean details from slides. Wish I could've attended that guy's talk. I also found a good blog entry called 5 Ways to Boost MySQL Scalability which had some good information, though some of it was over my head. tl;dr I just want to make sure the database doesn't have to be completely redesigned when it scales up, and I'm looking for tips to get it right the first time. The answer I'm looking for is a "list of things every developer should know about making a scalable MySQL database so your application doesn't perform like crap when the data gets huge".
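
    One concrete habit that helps with "design it so it scales" is deciding up front how a fast-growing table will be pruned or partitioned. A small illustrative MySQL sketch (table and column names are invented for the example, not taken from the question):

    -- Partitioning a fast-growing table by date keeps indexes smaller and lets old data
    -- be dropped cheaply (ALTER TABLE ... DROP PARTITION) instead of with huge DELETEs.
    CREATE TABLE page_views (
      id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
      user_id    INT UNSIGNED    NOT NULL,
      url        VARCHAR(255)    NOT NULL,
      created_at DATETIME        NOT NULL,
      PRIMARY KEY (id, created_at),               -- the partition column must be in every unique key
      KEY idx_user_created (user_id, created_at)
    )
    PARTITION BY RANGE (TO_DAYS(created_at)) (
      PARTITION p2012q1 VALUES LESS THAN (TO_DAYS('2012-04-01')),
      PARTITION p2012q2 VALUES LESS THAN (TO_DAYS('2012-07-01')),
      PARTITION pmax    VALUES LESS THAN MAXVALUE
    );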

    Read the article

  • Does (should?) changing the URI scheme name change the semantics?

    - by Doug
    If we take: http://example.com/foo is it fair to say that: ftp://example.com/foo .. points to the same resource, just using a different mechanism for resolving it (and of course possibly a different representation, but perhaps not)? This came to light in a discussion we were having surrounding some internal tooling with Git. We have to process some Git repositories, and they come to us as "git@{authority}/{path}", however the library we're using to interface with them doesn't support the git protocol. I suggested that we should make the service robust in that it tries to use HTTP or SSH, in essence discovering what protocols/schemes are supported for resolving the repository at {path} under each {authority}. This was met with some criticism: "We don't know if that's the same repository." My response was: "It had better be!" Looking at RFC 3986, I see this excerpt: URI "resolution" is the process of determining an access mechanism and the appropriate parameters necessary to dereference a URI; this resolution may require several iterations. To use that access mechanism to perform an action on the URI's resource is to "dereference" the URI. Which makes me think that the resolution process is permitted to try different protocols, because: Although many URI schemes are named after protocols, this does not imply that use of these URIs will result in access to the resource via the named protocol. The only concern I have, I guess, is that I only see reference to the notion of changing protocols when it comes to traversing relationships: it is possible for a single set of hypertext documents to be simultaneously accessible and traversable via each of the "file", "http", and "ftp" schemes if the documents refer to each other with relative references. I'm inclined to think I'm wrong in my initial beliefs, because the Normalization and Comparison section of said RFC doesn't mention any way of treating two URIs as equivalent if they use different schemes. It seems like schemes named/based on IP protocols ought to have this notion, at least?

    Read the article

  • round off and displaying the values

    - by S.PRATHIBA
    Hi all, I have the following code: import java.io.*; import java.sql.*; import java.math.*; import java.lang.*; public class Testd1{ public static void main(String[] args) { System.out.println("Sum of the specific column!"); Connection con = null; int m=1; double sum,sum1,sum2; int e[]; e=new int[100]; int p; int decimalPlaces = 5; for( int i=0;i< e.length;i++) { e[i]=0; } double b2,c2,d2,u2,v2; int i,j,k,x,y; double mat[][]=new double[10][10]; try { Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection ("jdbc:mysql://localhost:3306/prathi","root","mysql"); try{ Statement st = con.createStatement(); ResultSet res = st.executeQuery("SELECT Service_ID,SUM(consumer_feedback) FROM consumer1 group by Service_ID"); while (res.next()){ int data=res.getInt(1); System.out.println(data); System.out.println("\n\n"); int c1 = res.getInt(2); e[m]=res.getInt(2); if(e[m]<0) e[m]=0; m++; System.out.print(c1); System.out.println("\t\t"); } sum=e[1]+e[2]+e[3]+e[4]+e[5]; System.out.println("\n \n The sum is" +sum); for( p=21; p<=25; p++) { if(e[p] != 0) e[p]=e[p]/(int)sum; //I have type casted sum to get output BigDecimal bd1 = new BigDecimal(e[p]); bd1 = bd1.setScale(decimalPlaces, BigDecimal.ROUND_HALF_UP); // setScale is immutable e[p] = bd1.intValue(); System.out.println("\n\n The normalized value is" +e[p]); mat[4][p-21]=e[p]; } } catch (SQLException s){ System.out.println("SQL statement is not executed!"); } } catch (Exception e1){ e1.printStackTrace(); } } } I have a table named consumer1. After calculating the sum I am getting the values as follows: mysql> select Service_ID,sum(consumer_feedback) from consumer1 group by Service_ID; Service_ID sum(consumer_feedback) 31 17 32 0 33 60 34 38 35 38 In my program I am getting the sum for each Service_ID correctly. But after normalization, i.e. while I am calculating 17/153=0.111, I am getting the normalized value 0. I want the normalized values to be displayed correctly after rounding off. My output is as follows: C:\>javac Testd1.java C:\>java Testd1 Sum of the specific column! 31 17 32 0 33 60 34 38 35 38 The sum is153.0 The normalized value is0 The normalized value is0 The normalized value is0 The normalized value is0 The normalized value is0 But after normalization I want to get 17/153=0.111; instead I am getting the normalized value 0. I want these values to be rounded off.
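
    For comparison, here is a rough sketch of computing the normalized ratio directly in MySQL, using the consumer1 table from the question. The division happens in decimal arithmetic, so it avoids the truncation that the integer division (an int divided by a sum cast to int) causes in the Java code above. This is only an illustration, not the poster's code:

    -- Each service's feedback sum divided by the grand total, rounded to 5 decimal places.
    SELECT c.Service_ID,
           SUM(c.consumer_feedback)                      AS feedback_sum,
           ROUND(SUM(c.consumer_feedback) / t.total, 5)  AS normalized
    FROM   consumer1 c
    CROSS JOIN (SELECT SUM(consumer_feedback) AS total FROM consumer1) t
    GROUP BY c.Service_ID, t.total;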

    Read the article

  • Ado.Net Entity produces "namespace cannot be found"

    - by Dave
    I've seen several possible solutions to this, but none have worked for me. After adding a ADO.NET Entity Data Model to my .Net Forms C# web project, I am unable to use it. Perhaps I made a mistake adding it? The name of the file added is QcFormData.edmx. In my code, perhaps I'm instantiating it incorrectly? I tried adding the line: QcFormDataContainer db = new QcFormDataContainer(); It appears in Intellisense, but when compiling I get the error : Error 13 The type or namespace name 'QcFormDataContainer' could not be found (are you missing a using directive or an assembly reference?) I've followed the suggestions that I found online that did not help: 1) made sure there is "using System.Data.Entity" 2) made sure the dll exists. 3) made sure the reference exists. 4) one post said use using System.Web.Data.Entity; but I do not see that available. What am I missing? QcFormData.edmx <?xml version="1.0" encoding="utf-8"?> <edmx:Edmx Version="3.0" xmlns:edmx="http://schemas.microsoft.com/ado/2009/11/edmx"> <!-- EF Runtime content --> <edmx:Runtime> <!-- SSDL content --> <edmx:StorageModels> <Schema Namespace="MyCocoModel.Store" Alias="Self" Provider="System.Data.SqlClient" ProviderManifestToken="2008" xmlns:store="http://schemas.microsoft.com/ado/2007/12/edm/EntityStoreSchemaGenerator" xmlns="http://schemas.microsoft.com/ado/2009/11/edm/ssdl"> <EntityContainer Name="MyCocoModelStoreContainer"> <EntitySet Name="QcFieldValues" EntityType="MyCocoModel.Store.QcFieldValues" store:Type="Tables" Schema="dbo" /> </EntityContainer> <EntityType Name="QcFieldValues"> <Key> <PropertyRef Name="ID" /> </Key> <Property Name="ID" Type="int" Nullable="false" StoreGeneratedPattern="Identity" /> <Property Name="FieldID" Type="nvarchar" MaxLength="100" /> <Property Name="FieldValue" Type="nvarchar" MaxLength="100" /> <Property Name="DateTimeAdded" Type="datetime" /> <Property Name="OrderReserveNumber" Type="nvarchar" MaxLength="50" /> </EntityType> </Schema> </edmx:StorageModels> <!-- CSDL content --> <edmx:ConceptualModels> <Schema Namespace="MyCocoModel" Alias="Self" p1:UseStrongSpatialTypes="false" xmlns:annotation="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns:p1="http://schemas.microsoft.com/ado/2009/02/edm/annotation" xmlns="http://schemas.microsoft.com/ado/2009/11/edm"> <EntityContainer Name="MyCocoEntities" p1:LazyLoadingEnabled="true"> <EntitySet Name="QcFieldValues" EntityType="MyCocoModel.QcFieldValue" /> </EntityContainer> <EntityType Name="QcFieldValue"> <Key> <PropertyRef Name="ID" /> </Key> <Property Name="ID" Type="Int32" Nullable="false" p1:StoreGeneratedPattern="Identity" /> <Property Name="FieldID" Type="String" MaxLength="100" Unicode="true" FixedLength="false" /> <Property Name="FieldValue" Type="String" MaxLength="100" Unicode="true" FixedLength="false" /> <Property Name="DateTimeAdded" Type="DateTime" Precision="3" /> <Property Name="OrderReserveNumber" Type="String" MaxLength="50" Unicode="true" FixedLength="false" /> </EntityType> </Schema> </edmx:ConceptualModels> <!-- C-S mapping content --> <edmx:Mappings> <Mapping Space="C-S" xmlns="http://schemas.microsoft.com/ado/2009/11/mapping/cs"> <EntityContainerMapping StorageEntityContainer="MyCocoModelStoreContainer" CdmEntityContainer="MyCocoEntities"> <EntitySetMapping Name="QcFieldValues"> <EntityTypeMapping TypeName="MyCocoModel.QcFieldValue"> <MappingFragment StoreEntitySet="QcFieldValues"> <ScalarProperty Name="ID" ColumnName="ID" /> <ScalarProperty Name="FieldID" ColumnName="FieldID" /> <ScalarProperty 
Name="FieldValue" ColumnName="FieldValue" /> <ScalarProperty Name="DateTimeAdded" ColumnName="DateTimeAdded" /> <ScalarProperty Name="OrderReserveNumber" ColumnName="OrderReserveNumber" /> </MappingFragment> </EntityTypeMapping> </EntitySetMapping> </EntityContainerMapping> </Mapping> </edmx:Mappings> </edmx:Runtime> <!-- EF Designer content (DO NOT EDIT MANUALLY BELOW HERE) --> <Designer xmlns="http://schemas.microsoft.com/ado/2009/11/edmx"> <Connection> <DesignerInfoPropertySet> <DesignerProperty Name="MetadataArtifactProcessing" Value="EmbedInOutputAssembly" /> </DesignerInfoPropertySet> </Connection> <Options> <DesignerInfoPropertySet> <DesignerProperty Name="ValidateOnBuild" Value="true" /> <DesignerProperty Name="EnablePluralization" Value="True" /> <DesignerProperty Name="IncludeForeignKeysInModel" Value="True" /> <DesignerProperty Name="CodeGenerationStrategy" Value="None" /> </DesignerInfoPropertySet> </Options> <!-- Diagram content (shape and connector positions) --> <Diagrams></Diagrams> </Designer> </edmx:Edmx>

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
    ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are: <httpRuntime maxUrlLength=”<number here>”. This should be an integer value (it defaults to 260 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. This attribute gates the length of the Url without the query string. <httpRuntime maxQueryStringLength=”<number here>”. This should be an integer value (it defaults to 2048 characters). The value must be greater than or equal to zero, though obviously small values will lead to an unusable website. <httpRuntime requestPathInvalidCharacters=”List of characters you need included in ASP.NET's validation checks”. By default the characters are “<,>,*,%,&,:,\,?”. However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, say I want the character ‘!’ to be included in ASP.NET's URL validation logic. I can set the following: <httpRuntime requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,!”. A character could also be specified in its xml-encoded form (‘&lt;’ would mean the ‘<’ sign). I could specify the ‘!’ in its xml-encoded unicode format, as in requestPathInvalidCharacters=”<,>,*,%,&,:,\,?,&#x0021;”, or in its unicode-encoded form, the “<,>,*,%,&,:,\,?,%u0021” format. The following settings can be applied at the root web.config level, app web.config level, folder level, or within a location tag: <location path="some path here"> <system.web> <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidChars="" /> If any of the above settings fail request validation, an Http 400 “Bad Request” HttpException is thrown. These can be easily handled in the Application_Error handler in Global.asax. Also, a new attribute in <httpRuntime /> called “relaxedUrlToFileSystemMapping” has been added, with a default of false: <httpRuntime … relaxedUrlToFileSystemMapping="true|false" />. When the relaxedUrlToFileSystemMapping attribute is set to false, inbound Urls still need to be valid NTFS file paths. For example, Urls (sans query string) need to be less than 260 characters; no path segment within a Url can use old-style DOS device names (LPT1, COM1, etc…); Urls must be valid Windows file paths. A url like “http://digg.com/http://cnn.com” should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters="" above). Managed configuration for non-NTFS-compliant Urls is determined from the first valid configuration path found when walking up the path segments of the Url.
For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\ REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". Carl Dacosta ASP.NET QA Team

    Read the article


  • Oracle Database Appliance Now Certified by SAP

    - by Bandari Huang
    All SAP products based on SAP NetWeaver 7.x that are also certified for Oracle Database 11g Release 2 (single node or RAC) can now be used with the Oracle Database Appliance. RAC One Node is NOT supported. Only three-tier SAP installations are supported. Only the Oracle database can run on the Oracle Database Appliance; no SAP instance can be deployed on it. SAP instances have to run on separate middle-tier machines of any hardware architecture and operating system. Central Services (ASCS and/or SCS) can be configured to run on the Oracle Database Appliance for Unicode installations of SAP. SAP BR*Tools support is now available for the Oracle Database Appliance. For more information about SAP on ODA, please refer to: Using SAP NetWeaver with the Oracle Database Appliance (new Nov 2012), Note 1760737 - SAP Software and Oracle Database Appliance (ODA), Note 1785353 - ODA 11.2.0: Patches for 11.2.0.3

    Read the article

  • How do I select a subset of the available fonts for a particular application

    - by Aleve Sicofante
    Having all those exotic fonts (for a European), like Chinese, Hindi, or Russian fonts, is nice for a web browser: you never get those ugly unicode blocks and get the original glyphs instead. However, having the font menu in LibreOffice or AbiWord populated with all of those fonts is cumbersome and useless for most installations. Having more than a few fonts in note-taking applications is also somewhat overkill. Is there a way I can designate a subset of all the available fonts to work with a particular application? I understand the app itself could do it, but I'm asking for a way to make LibreOffice, for instance, not see certain fonts, only my selection of a "useful for text processing" subset.

    Read the article

  • A command-line clipboard copy and paste utility?

    - by Peter.O
    In Windows I used command-line clipboard copy-and-paste utilities: pclip.exe and gclip.exe. These were UnixUtils ports for Windows (but they only handled plain text). There were a couple of other native Windows utils which could write/extract any format. I've looked for something similar in Synaptic Package Manager, but I can't find anything. Is there something there that I've missed? ... or maybe this is available in bash scripting? The type of utility I'd like will be able to read/write via std-in/std-out or file-in/file-out, and handle Unicode/Rich-text/Picture/etc. clipboard formats... Late edit: NB: I'm not after a clipboard manager.

    Read the article

  • EPM 11.1.2 - Issues during configuration when using Oracle DB if not using UTF8

    - by Ahmed A
    If you see issues during configuration when the Oracle DB is not using UTF8, the workaround is as follows: a. During configuration of EPM products, a warning message is displayed if the Oracle DB is not UTF8 enabled. If you continue with the configuration, certain products will not work, as they will not be able to read the contents of the tables because the format is wrong. b. The Oracle DB must be set up to use AL32UTF8 or a superset that contains AL32UTF8. c. The only difference between the AL32UTF8 and UTF8 character sets is that AL32UTF8 stores characters beyond U+FFFF as four bytes (exactly as Unicode defines UTF-8), whereas Oracle's "UTF8" stores these characters as a sequence of two UTF-16 surrogate characters encoded using UTF-8 (six bytes per character). Besides this storage difference, AL32UTF8 also has better support for supplementary characters.
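
    To see which character set a database is actually using before running the EPM configurator, a quick check against the standard Oracle data dictionary views (run as a user with access to them) looks like this:

    -- NLS_CHARACTERSET should report AL32UTF8 (or a superset of it) for EPM.
    SELECT parameter, value
    FROM   nls_database_parameters
    WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');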

    Read the article

  • When attempting to install Ubuntu 12.04 from CD, I am stuck on a black screen with "loading bootlogo..."

    - by Jessica K
    I downloaded Ubuntu 12.04 to my desktop and burned it to a CD using Infra Recorder and the instructions on the Ubuntu website. I restarted the PC to boot from the CD and got a black screen with "Loading bootlogo...", then nothing happens and I have to restart into Windows. The CD seems to be correct; folders include .disk, boot, casper, dists, install, isolinux, pics, pool, preseed, an autorun file, an md5sum text file, a readme.diskdefines file, and the wubi app. System Information Operating System: Windows Vista™ Home Premium (6.0, Build 6002) Service Pack 2 (6002.vistasp2_gdr.120824-0336) System Manufacturer: TOSHIBA System Model: Satellite L305 BIOS: Default System BIOS Processor: Intel(R) Pentium(R) Dual CPU T2390 @ 1.86GHz (2 CPUs), ~1.9GHz Memory: 3062MB RAM Page File: 1553MB used, 4772MB available Windows Dir: C:\Windows DirectX Version: DirectX 11 DX Setup Parameters: Not found DxDiag Version: 7.00.6002.18107 32bit Unicode Drive: D: Model: PIONEER DVD-RW DVRKD08L ATA Device Driver: c:\windows\system32\drivers\cdrom.sys, 6.00.6002.18005 (English), 4/11/2009 00:39:17, 67072 bytes

    Read the article

  • What is the use of universal character names in identifiers in C++

    - by Jan Hudec
    The C++ standard (I noticed it in the new one, but it did already exist in C++03) specifies universal character names, written as \uNNNN and \UNNNNNNNN and representing the characters with unicode codepoints NNNN/NNNNNNNN. This is useful with string literals, especially since explicitly UTF-8, UTF-16 and UCS-4 string literals are also defined. However, the universal character literals are also allowed in identifiers. What is the motivation behind that? The syntax is obviously totally unreadable, the identifiers may be mangled for the linker and it's not like there was any standard function to retrieve symbols by name anyway. So why would anybody actually use an identifier with universal character literals in it? Edit: Since it actually existed in C++03 already, additional question would be whether you actually saw a code that used it?

    Read the article

  • How Hackers Can Disguise Malicious Programs With Fake File Extensions

    - by Chris Hoffman
    File extensions can be faked – that file with an .mp3 extension may actually be an executable program. Hackers can fake file extensions by abusing a special Unicode character, forcing text to be displayed in reverse order. Windows also hides file extensions by default, which is another way novice users can be deceived – a file with a name like picture.jpg.exe will appear as a harmless JPEG image file.

    Read the article

  • SFML title bar with weird characters when using UTF-8

    - by TheOm3ga
    (Previously asked at http://stackoverflow.com/questions/4922478/sfml-title-bar-with-weird-characters-when-using-utf-8) I've just started using SFML, and one of the first problems I've come across is some weird characters in the titlebar whenever I try to use accents or any other extended char. For instance, I've got: sf::RenderWindow Ventana(sf::VideoMode(800, 600, 32), "Año nuevóóó"); And the titlebar renders like AÂ+o nuevoA³A³A³ This ONLY HAPPENS if my source code file is encoded in UTF-8. If I change the file encoding to ISO-8859-1, it shows properly. Obviously all of my files use UTF-8, as it's the system-wide encoding. I'm using GCC under Ubuntu GNU/Linux. I've tried using the different utilities in sf::Unicode to adapt the text, but none of them seems to work.

    Read the article

  • Using HTML entities in XML Literals (Avner Aharoni)

    One of the common use cases of XML literals is creating HTML. However, HTML entities cannot be used in XML literals, since LINQ to XML directly supports only the document type definitions (DTDs) defined in the XML 1.0 spec. You can read more about it here. The workaround is to use the Unicode representation of the entity; although it's not as readable as the HTML entities, the output is the same. Here are two examples of HTML entities from the XHTML spec: Entity...

    Read the article


  • What is the use of universal character names in identifiers in C++11

    - by Jan Hudec
    The new C++ standard specifies universal character names, written as \uNNNN and \UNNNNNNNN and representing the characters with unicode codepoints NNNN/NNNNNNNN. This is useful with string literals, especially since explicitly UTF-8, UTF-16 and UCS-4 string literals are also defined. However, the universal character literals are also allowed in identifiers. What is the motivation behind that? The syntax is obviously totally unreadable, the identifiers may be mangled for the linker and it's not like there was any standard function to retrieve symbols by name anyway. So why would anybody actually use an identifier with universal character literals in it?

    Read the article

  • XRegExp Regular Expression Library for JavaScript

    - by Jan Goyvaerts
    XRegExp is an open source JavaScript library. It extends JavaScript’s regex syntax with features such as free-spacing, named capture, mode modifiers, and Unicode categories, blocks, and scripts. It also provides its own test(), exec(), forEach(), replace(), and split() methods that eliminate most cross-browser inconsistencies and bugs. Anyone using non-trivial regexes in their JavaScript code should seriously consider using XRegExp. Last month’s update of the Regular-Expressions.info website added full coverage of XRegExp to the regex tutorial and regex reference sections. But the tools & languages section was missing the XRegExp page, resulting in broken links in the tutorial and reference sections. This page has now been added.

    Read the article

  • Why does 'quickly package' fail with "An error has occurred when creating debian packaging"?

    - by slashcrack
    I've got a big problem with packaging my Quickly application for the Ubuntu App Showdown. When I try to package or submit my application, I get some warnings: quickly package ........ ---------------------------------- WARNING: syntax errors in facebook/FacebookWindow.py: encoding declaration in Unicode string (FacebookWindow.py, line 0) WARNING: the following files are not recognized by DistUtilsExtra.auto: AUTHORS~ facebook/AboutFacebookDialog.py~ facebook/FacebookWindow.py~ facebook/PreferencesFacebookDialog.py~ facebook/__init__.py~ facebook_lib/AboutDialog.py~ facebook_lib/Builder.py~ facebook_lib/PreferencesDialog.py~ facebook_lib/Window.py~ facebook_lib/__init__.py~ facebook_lib/facebookconfig.py~ facebook_lib/helpers.py~ setup.py~ ---------------------------------- An error has occurred when creating debian packaging ERROR: can't create or update ubuntu package ERROR: package command failed Aborting What does the second warning mean? How do I solve these warnings? I want to submit my app for the Ubuntu App Developer Showdown to my PPA and it doesn't work. Thanks for any answer.

    Read the article

  • IsNumeric() Broken? Only up to a point.

    - by Phil Factor
    In SQL Server, probably the best-known 'broken' function is poor ISNUMERIC(). The documentation says 'ISNUMERIC returns 1 when the input expression evaluates to a valid numeric data type; otherwise it returns 0. ISNUMERIC returns 1 for some characters that are not numbers, such as plus (+), minus (-), and valid currency symbols such as the dollar sign ($).' Although it will take numeric data types (no, I don't understand why either), its main use is supposed to be to test strings to make sure that you can convert them to whatever numeric datatype you are using (int, numeric, bigint, money, smallint, smallmoney, tinyint, float, decimal, or real). It wouldn't actually be of much use anyway, since each datatype has different rules. You actually need a RegEx to do a reasonably safe check. The other snag is that the IsNumeric() function is a bit broken. SELECT ISNUMERIC(',') This cheerfully returns 1, since it believes that a comma is a currency symbol (not a thousands separator) and you meant to say 0, in this strange currency. However, SELECT ISNUMERIC(N'£') isn't recognized as currency. '+' and '-' are seen to be numeric, which is stretching it a bit. You'll see that what it allows isn't really broken, except that it doesn't recognize Unicode currency symbols: it just tells you that one numeric type is likely to accept the string if you do an explicit conversion to it using the string. Both these work fine, so poor IsNumeric has to follow suit. SELECT CAST('0E0' AS FLOAT) SELECT CAST(',' AS MONEY) but it is harder to predict which data type will accept a '+' sign. SELECT CAST('+' AS money) --0.00 SELECT CAST('+' AS INT) --0 SELECT CAST('+' AS numeric) /* Msg 8115, Level 16, State 6, Line 4 Arithmetic overflow error converting varchar to data type numeric. */ SELECT CAST('+' AS FLOAT) /* Msg 8114, Level 16, State 5, Line 5 Error converting data type varchar to float. */ So we can begin to say that maybe IsNumeric isn't really broken, but is answering a silly question: 'Is there some numeric datatype to which I can convert this string?' Almost, but not quite. The bug is that it doesn't understand Unicode currency characters such as the euro or franc, which are actually valid when used in the CAST function (perhaps they're delaying fixing the euro bug just in case it isn't necessary). SELECT ISNUMERIC(N'€23.67') --0 SELECT CAST(N'€23.67' AS money) --23.67 SELECT ISNUMERIC(N'£100.20') --1 SELECT CAST(N'£100.20' AS money) --100.20 Also the CAST function itself is quirky in that it cannot convert perfectly reasonable string representations of integers into integers: SELECT ISNUMERIC('200,000') --1 SELECT CAST('200,000' AS INT) /* Msg 245, Level 16, State 1, Line 2 Conversion failed when converting the varchar value '200,000' to data type int. */ A more sensible question is 'Is this an integer or decimal number?' This cuts out a lot of the apparent quirkiness. We do this by the '+E0' trick. If we want to include floats in the check, we'll need to make it a bit more complicated. Here is a small test-rig:

    SELECT  PossibleNumber,
            ISNUMERIC(CAST(PossibleNumber AS NVARCHAR(20)) + 'E+00') AS Hack,
            ISNUMERIC(PossibleNumber + CASE WHEN PossibleNumber LIKE '%E%' THEN '' ELSE 'E+00' END) AS Hackier,
            ISNUMERIC(PossibleNumber) AS RawIsNumeric
    FROM    (SELECT CAST(',' AS NVARCHAR(10)) AS PossibleNumber
             UNION SELECT '£' UNION SELECT '.'
             UNION SELECT '56' UNION SELECT '456.67890'
             UNION SELECT '0E0' UNION SELECT '-'
             UNION SELECT '-' UNION SELECT '.'
             UNION SELECT N'€' UNION SELECT N'¢'
             UNION SELECT N'₣' UNION SELECT N'€34.56'
             UNION SELECT '-345' UNION SELECT '3.332228E+09') AS examples

    Which gives the result ...

    PossibleNumber  Hack  Hackier  RawIsNumeric
    --------------  ----  -------  ------------
    €               0     0        0
    -               0     0        1
    ,               0     0        1
    .               0     0        1
    ¢               0     0        1
    £               0     0        1
    ₣               0     0        0
    €34.56          0     0        0
    0E0             0     1        1
    3.332228E+09    0     1        1
    -345            1     1        1
    456.67890       1     1        1
    56              1     1        1

    I suspect that this is as far as you'll get before you abandon IsNumeric in favour of a regex. You can only get part of the way with the LIKE wildcards, because you cannot specify quantifiers. You'll need full-blown Regex strings like these: [-+]?\b[0-9]+(\.[0-9]+)?\b #INT or REAL [-+]?\b[0-9]{1,3}\b #TINYINT [-+]?\b[0-9]{1,5}\b #SMALLINT .. but you'll get even these to fail to catch numbers out of range. So is IsNumeric() an out-and-out rogue function? Not really, I'd say, but then it would need a damned good lawyer.
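
    As a side note, on SQL Server 2012 and later the built-in TRY_CONVERT (and TRY_CAST) answer the narrower question directly, returning NULL instead of raising an error when the string will not convert to the specific target type. A few probes in the spirit of the article:

    SELECT TRY_CONVERT(int,   '200,000')        AS AsInt,     -- NULL: int will not take the comma
           TRY_CONVERT(money, '200,000')        AS AsMoney,   -- 200000.00: money accepts it
           TRY_CONVERT(float, '3.332228E+09')   AS AsFloat,   -- 3332228000
           TRY_CONVERT(money, N'£100.20')       AS AsPounds;  -- 100.20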

    Read the article
