Search Results

Search found 5637 results on 226 pages for 'triple slash comments'.


  • StyleCop XML Documentation Header - Using 3 /// instead of 2 //

    - by Adam Jenkin
    I am using XML documentation headers on my C# files to pass the StyleCop rule SA1633. Currently, I have to use the 2-slash commenting style for StyleCop to recognize the header. For example: // <copyright file="abc.ascx.cs" company="MyCompany.com"> // MyCompany.com. All rights reserved. // </copyright> // <author>Me</author> This works fine for StyleCop; however, I would like to use the 3-slash commenting style so that Visual Studio understands the comments as XML and provides the XML functionality (highlighting, auto-indenting, etc.): /// <copyright file="abc.ascx.cs" company="MyCompany.com"> /// MyCompany.com. All rights reserved. /// </copyright> /// <author>Me</author> The problem is that when using 3 slashes, StyleCop no longer sees the header and throws the SA1633 warning. Is there any way to configure StyleCop to understand that the header is contained in XML using 3 slashes? Thanks, Adam
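    One blunt workaround, sketched here against the Settings.StyleCop schema as I recall it from StyleCop 4.3 (the analyzer id and rule name should be verified against the installed version), is to disable SA1633 for the project and keep the /// header purely for Visual Studio's benefit:

        <!-- Settings.StyleCop (sketch; schema details may differ by version) -->
        <StyleCopSettings Version="4.3">
          <Analyzers>
            <Analyzer AnalyzerId="Microsoft.StyleCop.CSharp.DocumentationRules">
              <Rules>
                <!-- SA1633: FileMustHaveHeader -->
                <Rule Name="FileMustHaveHeader">
                  <RuleSettings>
                    <BooleanProperty Name="Enabled">False</BooleanProperty>
                  </RuleSettings>
                </Rule>
              </Rules>
            </Analyzer>
          </Analyzers>
        </StyleCopSettings>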


  • Where does Tomcat append / to directory paths?

    - by Anonymouse
    Suppose my Tomcat webapps directory looks like this: webapps/ webapps/fooapp/ webapps/fooapp/WEB-INF/ webapps/fooapp/WEB-INF/web.xml webapps/fooapp/bardir/ When I make a GET request for /fooapp/bardir, Tomcat sees that webapps/fooapp/bardir is a directory and sends back a 302 to /fooapp/bardir/ (with a slash at the end). Here is my question: Where in the Tomcat source code does this take place? (I'm looking at 6.0.x but a correct answer for any version would be a great starting point.) The only reference material I can find on this subject is in the Catalina Functional Specifications which states, regarding the Default Servlet: On each HTTP GET request processed by this servlet, the following processing shall be performed: [...] If the requested resource is a directory: If the request path does not end with "/", redirect to a corresponding path with "/" appended so that relative references in welcome files are resolved correctly. However, this functionality does not appear to be in org.apache.catalina.servlets.DefaultServlet; or at least, it's not there exclusively: if I replace the default servlet in web.xml with a servlet whose servlet-class does not exist, directory paths still come back 302 to add the slash, while every other request comes back with an error as expected.
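    One place worth checking (an educated guess rather than a verified answer) is the request-mapping layer, org.apache.tomcat.util.http.mapper.Mapper together with org.apache.catalina.connector.CoyoteAdapter, since a redirect issued before servlet dispatch would explain why replacing the default servlet changes nothing. The behavior itself is easy to state in servlet terms; this sketch is a hypothetical illustration, not Tomcat source:

        import java.io.File;
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        public class TrailingSlashSketch extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String real = getServletContext().getRealPath(req.getServletPath());
                // If the request names a directory but lacks the trailing slash,
                // send a 302 to the slash-terminated path, as the Catalina
                // functional spec describes.
                if (real != null && new File(real).isDirectory()
                        && !req.getRequestURI().endsWith("/")) {
                    resp.sendRedirect(req.getRequestURI() + "/");
                }
            }
        }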


  • DevConnections Spring 2010 Speaker Evals and Tips

    As a conference speaker, I always look forward to hearing from attendees whether they felt my sessions were valuable and worth their time. It's always gratifying to get a high score, but of course it's the (preferably constructive) criticism that's key to continued improvement. I'm by no means the best technical presenter around, and I'm always looking for ways to improve. I've recently spoken at a few events, including TechEd and an Ohio event called Stir Trek. DevConnections was actually back in April, but they're just getting their final evals out to speakers. TechEd, of course, does online evals, so immediately after your talks you can see what people think. I'll try and post my TechEd evals in the next week or so. I gave 3 talks at DevConnections Spring 2010 / VS2010 Launch, which I discussed in this previous blog post. In this follow-up, I'm just going to share some eval info and my thoughts on it, albeit a couple of months later.

    Pragmatic ASP.NET Tips, Tricks, and Tools. Evals Turned In: 27. Overall Eval: 3.74. Average Score: 3.47. 89% found the technical level Just Right; 7.4% thought it was too basic (3.6% did not respond). Since nobody thought the content was Too Complex, I could perhaps have added some more complex material, but having about 90% say it's Just Right is pretty good. 92% said at least 50% of the material was new to them; 36% said 75% or more was new. That's also pretty good, I think. 77.8% can use the information immediately; 15% can use it within 2-6 months (7.2% no response). Overall, 78% rated the session Excellent, 18% Good, 4% Fair.

    All comments (9): Steve did a great job. Excellent session! It was good. I'm now super excited to attend Steve's other sessions later today. Very useful. One of the best speakers here. Bring him back to future conferences please. Continue to have this session with new and old stuff. I always find something I did not know about. Excellent! This was the best session I've seen all week. Did not increase font on all pages; could not see. For Steve to have had more sessions.

    Note to self: make the fonts bigger across the board. Otherwise, this is all good for my ego. :) This is always a very popular session and one I really enjoy giving. Tips and Tricks talks are pretty easy because you don't have to go in depth with any particular thing, and they're almost always about existing technology, so you're not dealing with betas, lack of documentation, and other issues. It's an easy session to do well, in my experience, and one which I think attendees definitely appreciate.

    What's New in ASP.NET MVC 2. Evals Turned In: 23. Overall Eval: 3.77. Average Score: 3.47. (Wow, I can't believe I scored better on this talk than the tips and tricks talk, which I've given many times and was more excited about.) 96% found the technical level Just Right. 90% found 50% or more of the material to be new. 43% can use the info immediately, and another 43% can use it within 2-6 months; I guess that speaks to adoption rates of MVC 2 among my attendees. Overall, 74% said the session was Excellent, 22% Good, 4% no response.

    All comments (6): Great job, thank you. Great speaker! Really good; a little lost in the code at some points, but great information. Speaker needs to repeat questions from audience for everyone to hear. Exceeded my expectations. Great speaker, very informative.

    I really do try to religiously repeat questions from the audience for everyone to hear, but obviously I didn't do it 100% of the time. Note to self: remember to repeat questions. That and making fonts big are really basic speaker best practices, which just goes to prove that fundamentals are always something that can be perfected.

    SOLIDify Your ASP.NET MVC 2 Application. Evals Turned In: 8 (!). Overall Eval: 3.63. Average Score: 3.47. As I recall, this was one of the last talks of the day / show, which might account for the low number of evals turned in. I don't recall speaking to an empty room for this talk, although it certainly wasn't as crowded as the tips and tricks talk. 100% found the technical level Just Right. 100% found at least half the material new. 62.5% can use it at once and 37.5% within 2-6 months. 62.5% rated the session Excellent overall; 37.5% Good. I'm thinking there were 5 evals with all 4s checked and 3 with all 3s checked (4 = Excellent, 3 = Good).

    All comments (3): This covered many topics I've read about recently, and it helped reinforce them. It was a nice overview of the SOLID principles, but I thought there might be specifics for MVC 2. I am glad there is not. Move a little slower.

    OK, so another fundamental: don't go too fast. Looks like I got one fundamental tip from the comments of each talk.

    My Take-Aways: Remember the fundamentals. It's worth going through a checklist prior to presenting to make sure these things are fresh in your mind. Increase all font sizes. Repeat all questions from audience members without microphones (this is also a great way to stall for time, btw). Resist the urge to move too quickly, especially if you're nervous or short of time. Writing this up in a blog post also further reinforces these fundamentals for me, which is one of the main reasons why I do it: I retain things better when I write them, and even more so when I write them for public consumption, since I have to really think about what I'm saying. And maybe a few of you find this interesting or helpful, which is a bonus.


  • Python raw strings and trailing backslashes

    - by dash-tom-bang
    I ran across something once upon a time and wondered if it was a Python "bug" or at least a misfeature. I'm curious if anyone knows of any justifications for this behavior. I thought of it just now reading "Code Like a Pythonista," which has been enjoyable so far. I'm only familiar with the 2.x line of Python. Raw strings are strings that are prefixed with an r. This is great because I can use backslashes in regular expressions and I don't need to double everything everywhere. It's also handy for writing throwaway scripts on Windows, so I can use backslashes there also. (I know I can also use forward slashes, but throwaway scripts often contain content cut&pasted from elsewhere in Windows.) So great! Unless, of course, you really want your string to end with a backslash. There's no way to do that in a 'raw' string. In [9]: r'\n' Out[9]: '\\n' In [10]: r'abc\n' Out[10]: 'abc\\n' In [11]: r'abc\' ------------------------------------------------ File "<ipython console>", line 1 r'abc\' ^ SyntaxError: EOL while scanning string literal In [12]: r'abc\\' Out[12]: 'abc\\\\' So one slash before the closing quote is an error, but two slashes gives you two slashes! Certainly I'm not the only one that is bothered by this? Thoughts on why 'raw' strings are 'raw, except for slash-quote'? I mean, if I wanted to embed a single quote in there I'd just use double quotes around the string, and vice versa. If I wanted both, I'd just triple quote. If I really wanted three quotes in a row in a raw string, well, I guess I'd have to deal, but is this considered "proper behavior"?
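    For completeness, the usual workarounds (a sketch of common practice, not an official recommendation) are to add the final backslash as a separate, non-raw literal, or to give up on the raw string for that case:

        # Python 2.x: a raw string cannot end in an odd number of
        # backslashes, so append the trailing one separately.
        path = r'C:\some\dir' + '\\'   # raw body, escaped tail
        also = 'C:\\some\\dir\\'       # fully escaped alternative
        print path                     # -> C:\some\dir\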


  • Why am I getting a permission denied error on my public folder?

    - by Robin Fisher
    Hi all, This one has got me stumped. I'm deploying a Rails 3 app to Slicehost running Apache 2 and Passenger. My server is running Ruby 1.9.1 using RVM. I am receiving a permission denied error on the "public" folder in my app. My Virtual Host is set up as follows: <VirtualHost *:80> ServerName sharerplane.com ServerAlias www.sharerplane.com ServerAlias *.sharerplane.com DocumentRoot /home/robinjfisher/public_html/sharerplane.com/current/public/ <Directory "/home/robinjfisher/public_html/sharerplane.com/public/"> AllowOverride all Options -MultiViews Order allow,deny Allow from all </Directory> PassengerDefaultUser robinjfisher </VirtualHost> I've tried the following things: trailing slash on public; no trailing slash on public; PassengerUserSwitching on and off; PassengerDefaultUser set and not set; with and without the <Directory> block. The public folder is owned by robinjfisher:www-data, and Passenger is running as robinjfisher, so I can't see why there are permission issues. Does anybody have any thoughts? Thanks, Robin. PS. I have disabled the site for the time being to avoid indexing, so what is there currently is not the site in question.
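    One detail stands out in the config as posted (an observation, not a confirmed diagnosis): the <Directory> path is missing the current segment that the DocumentRoot uses, so the Allow directives may never apply to the directory actually being served. A corrected block would look like:

        DocumentRoot /home/robinjfisher/public_html/sharerplane.com/current/public/
        <Directory "/home/robinjfisher/public_html/sharerplane.com/current/public/">
            AllowOverride all
            Options -MultiViews
            Order allow,deny
            Allow from all
        </Directory>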


  • .htaccess Redirect Loop, adding multiple .php extensions

    - by Ryan Smith
    I have sort of a small parent/teacher social network set up for my school. I use my .htaccess file to make the links to teacher profiles cleaner and to add a trailing slash to all URLs. I get this problem when going to /teachers/teacher-name/: the link (sometimes) redirects to /teachers/teacher-name.php.php.php.php.php.php.php.php... Below is my .htaccess file. Sometimes clearing my browser cache in Chrome temporarily fixes it. I can't exactly write .htaccess syntax, but I'm pretty familiar with it. Any suggestions are appreciated! RewriteEngine on RewriteBase / #remove php ext RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME}\.php -f RewriteRule ^([^/]+)/$ $1.php RewriteRule ^([^/]+)/([^/]+)/$ $1/$2.php #force trailing slash/ RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)([^/])$ /$1$2/ [L,R=301] #other rewrites RewriteRule ^teachers/([^/\.]+)/$ /teachers/profile.php?u=$1 RewriteRule ^teachers/([^/\.]+)/posts/$ /teachers/posts.php?u=$1 RewriteRule ^teachers/([^/\.]+)/posts/([^/\.]+)/$ /teachers/post.php?u=$1&p=$2 RewriteRule ^gallery/([^/\.]+)/$ /gallery/album.php?n=$1 RewriteRule ^gallery/([^/\.]+)/slideshow/$ /gallery/slideshow.php?n=$1 RewriteRule ^gallery/([^/\.]+)/([^/\.]+)/([^/\.]+)/$ /gallery/photo.php?a=$1&p=$2&e=$3 EDIT: I have attached a screenshot of exactly what I'm talking about.
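    One hedged reading of the loop: RewriteCond applies only to the single RewriteRule that follows it, so the two-segment rule above runs unconditionally, and once a URL like /teachers/name.php/ comes back around, it gets .php appended again. Giving that rule its own guards is a common defensive fix (a sketch, not a tested drop-in):

        # Sketch: never add .php to a URL that already contains it,
        # and only rewrite when the target script actually exists
        RewriteCond %{REQUEST_URI} !\.php
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{DOCUMENT_ROOT}/$1/$2.php -f
        RewriteRule ^([^/]+)/([^/]+)/$ $1/$2.php [L]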


  • Best way to create a SPARQL endpoint for a RDBMS (MySQL database)

    - by Ankur
    I am doing (want to do) some experiments with Linked Open Datasets, particularly those put out by governments. I have an RDBMS (more specifically, MySQL). I designed it with semantic web ideas in mind, i.e., I have information stored as objects, predicates and classes which define objects. In turn, all objects are related to each other through statements of the form subject -- predicate -- object (where the subjects are from the objects table). I want to be able to query other RDF triple stores from my application and let other triple stores query my data. Is it possible to "set something up" so that this is possible? I have looked at Jena. Using Jena seems to mean I have to use it as a storage application rather than MySQL; the only problem with this is that I include a new concept called a category (which I don't think is part of the semantic web languages). I will use categories to help with displaying information (they don't have any other meaning), but using Jena seems to mean that I can't organise predicates under categories for more convenient viewing. I am using Java, so a Java API is preferred. It's also possible I misunderstood the purpose of Jena, and maybe it can be of use, but I am not sure how. I am sure four days from now this question will seem rather silly, but at the moment I am somewhat confused about how to proceed.
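    On the "query other triple stores" half of the question, Jena's ARQ module can run SPARQL against a remote endpoint without being used for storage at all; a minimal sketch (Jena 2.x package names, with a placeholder endpoint URL):

        import com.hp.hpl.jena.query.*;

        public class RemoteSparql {
            public static void main(String[] args) {
                String q = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";
                // http://example.org/sparql is a hypothetical endpoint
                QueryExecution qe = QueryExecutionFactory.sparqlService(
                        "http://example.org/sparql", q);
                try {
                    // print the result table to stdout
                    ResultSetFormatter.out(System.out, qe.execSelect());
                } finally {
                    qe.close();
                }
            }
        }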


  • Could I do this blind relative to absolute path conversion (for perforce depot paths) better?

    - by wonderfulthunk
    I need to "blindly" (i.e. without access to the filesystem, in this case the source control server) convert some relative paths to absolute paths. So I'm playing with dotdots and indices. For those that are curious I have a log file produced by someone else's tool that sometimes outputs relative paths, and for performance reasons I don't want to access the source control server where the paths are located to check if they're valid and more easily convert them to their absolute path equivalents. I've gone through a number of (probably foolish) iterations trying to get it to work - mostly a few variations of iterating over the array of folders and trying delete_at(index) and delete_at(index-1) but my index kept incrementing while I was deleting elements of the array out from under myself, which didn't work for cases with multiple dotdots. Any tips on improving it in general or specifically the lack of non-consecutive dotdot support would be welcome. Currently this is working with my limited examples, but I think it could be improved. It can't handle non-consecutive '..' directories, and I am probably doing a lot of wasteful (and error-prone) things that I probably don't need to do because I'm a bit of a hack. I've found a lot of examples of converting other types of relative paths using other languages, but none of them seemed to fit my situation. These are my example paths that I need to convert, from: //depot/foo/../bar/single.c //depot/foo/docs/../../other/double.c //depot/foo/usr/bin/../../../else/more/triple.c to: //depot/bar/single.c //depot/other/double.c //depot/else/more/triple.c And my script: begin paths = File.open(ARGV[0]).readlines puts(paths) new_paths = Array.new paths.each { |path| folders = path.split('/') if ( folders.include?('..') ) num_dotdots = 0 first_dotdot = folders.index('..') last_dotdot = folders.rindex('..') folders.each { |item| if ( item == '..' ) num_dotdots += 1 end } if ( first_dotdot and ( num_dotdots > 0 ) ) # this might be redundant? folders.slice!(first_dotdot - num_dotdots..last_dotdot) # dependent on consecutive dotdots only end end folders.map! { |elem| if ( elem !~ /\n/ ) elem = elem + '/' else elem = elem end } new_paths << folders.to_s } puts(new_paths) end


  • How to use boost::fusion::transform on heterogeneous containers?

    - by Kyle
    Boost.org's example given for fusion::transform is as follows: struct triple { typedef int result_type; int operator()(int t) const { return t * 3; }; }; // ... assert(transform(make_vector(1,2,3), triple()) == make_vector(3,6,9)); Yet I'm not "getting it." The vector in their example contains elements all of the same type, but a major point of using fusion is containers of heterogeneous types. What if they had used make_vector(1, 'a', "howdy") instead? int operator()(int t) would need to become template<typename T> T& operator()(T& const t) But how would I write the result_type? template<typename T> typedef T& result_type certainly isn't valid syntax, and it wouldn't make sense even if it was, because it's not tied to the function.
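    A common way through this (a sketch of the Boost result_of protocol as I understand it; check the result_of docs for your Boost version) is to replace the fixed result_type with a member result<> template, which lets the return type vary with the argument type:

        #include <boost/type_traits/remove_const.hpp>
        #include <boost/type_traits/remove_reference.hpp>

        struct echo // returns its argument unchanged, whatever its type
        {
            template <typename Sig> struct result;

            // Sig is deduced as Self(Arg); Arg may arrive as a (const) reference
            template <typename Self, typename T>
            struct result<Self(T)>
            {
                typedef typename boost::remove_const<
                    typename boost::remove_reference<T>::type>::type type;
            };

            template <typename T>
            T operator()(const T& t) const { return t; }
        };

    With that in place, something like transform(make_vector(1, 'a', std::string("howdy")), echo()) would yield another heterogeneous vector, and the same shape works for any per-type computation.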


  • mod_rewrite: no access to real files and directories

    - by tshabalala
    Hello. I use mod_rewrite/.htaccess for pretty URLs. I forward all the requests to my index.php, like this: RewriteRule ^/?([a-zA-Z0-9/-]+)/?$ /index.php [NC,L] The index.php then handles the requests. I'm also using this condition/rule to eliminate trailing slashes (or rather rewrite them to the URL without a trailing slash, with a 301 redirect; I'm doing this to avoid duplicate content and because I like no trailing slashes better): RewriteCond %{HTTP_HOST} !^\.localhost$ [NC] RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L] This works well, except that I now get an infinite loop when trying to access a (real) directory (the rewrite rule removes the trailing slash, the server adds it again, ...). I solved this by setting the DirectorySlash directive to Off: DirectorySlash Off I don't know how good this solution is, I don't feel too confident about it tbh. Anyway, what I'd like to do is completely ignore "real" files and directories, since I don't need them and I only use pretty URLs with "virtual" files/directories anyway. This would allow me to avoid the DirectorySlash workaround/hack too. Is this possible? Thanks!
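    If disabling DirectorySlash keeps feeling fragile, the opposite trade-off is also available (a sketch; note it reintroduces exactly the filesystem checks being avoided): exclude real directories from the slash-stripping rule so mod_dir never has to be turned off:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{HTTP_HOST} !^\.localhost$ [NC]
        RewriteRule ^(.+)/$ http://%{HTTP_HOST}/$1 [R=301,L]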


  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence in the BSS layer, product co-creation, and increasing audit and security concerns for service providers. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps to launch next generation products rapidly in the context of the telecommunications industry.

    1.0. Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe, and the ability to change price plans quickly without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards. The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0. An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, the increased volume of offerings, bundled offering structures and ever increasing regulatory requirements. In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. It is a roadblock in the journey towards becoming an agile organization.
    Figure 2: Complexity in Product Management

    'Network Technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., moving from siloed networks to converged networks. Consequently, the product solution is different.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0. Transformation of Next Generation Product Management

    Companies must focus on their Product Management Transformation Journey in the areas of:

    · Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    · Management of the Intellectual Property (IP) on the product concept and partnership in the design of discrete components to integrate into the system
    · Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    · Management of effective operational separation to comply with regulatory bodies
    · Reuse of existing designs and the addition of relevant features such as value-added services to enable effective product bundling

    Figure 4: Next generation needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

    4.0. PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product attribute data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    · Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules
    · PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    · They maintain relationships/links between different elements of the entire product definition
    · Telco PLM packages are specialized in next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)
    · They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional
    · A new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification
    · Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Master) Approach

    This approach is implemented across verticals such as aerospace and automotive.
    It focuses on a physically centralized product master on which other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

    Architectural changes must be made in the existing business landscape of applications to create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules managed and published centrally. Complete product configuration validation will be done in the enterprise/central product catalogue, and the final configuration will be sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) will be centrally created and maintained, while the advanced product definition (product bundling, promotions, offers and discount plans) will be created in the respective downstream OSS systems.

    For example, basic product definitions such as attributes, product hierarchy and basic price plans will be created and maintained in the enterprise/central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master the advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

    5.0 Customer Scenario - Before EPC Deployment

    A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering broadband and VoIP services to customers. The company wanted to reuse a majority of the broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
    The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    · Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    · Product managers spent a lot of time performing tasks involving duplication or re-keying of data; manual effort caused errors, cost and time over-runs
    · There was no effective product and price data governance mechanism; price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue
    · Product data had re-usability issues and was not in a structured format, which resulted in uncontrolled product portfolio creation and product management issues
    · The lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch
    · Designers were constrained by existing legacy product management solutions in modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross-selling

    5.1 Customer Scenario - After EPC Deployment

    Figure 9: SOA-based end-to-end EPC Solution

    The company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for new offerings was created using the entities defined in the Enterprise Product Model. Supplier product catalogue data such as routers and set-top boxes were loaded onto the new solution through a SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that is part of the solution.

    6.0 How PLM Enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of product management for next generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, the introduction of new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.


  • SQL SERVER – Basic Calculation and PEMDAS Order of Operation

    - by pinaldave
    After thinking about it for a long time, I have decided to write this blog post. I had no plan to create a post on this subject, but given the amount of conversation it has generated on my Facebook page, I decided to bring up a few of the questions and concerns discussed there. There are more than 10,000 comments so far, with lots of discussion about what the answer should be. As far as I can tell, there is a big debate going on on Facebook; for educational purposes, you should go ahead and read some of the comments. They are very interesting and will for sure teach you some new stuff. Even though some of the comments are clearly wrong, they make some good points, and I believe the discussion for sure develops some logic.

    Here is my take on this subject. I believe the answer is 9, as I follow the PEMDAS order of operations. PEMDAS stands for parentheses, exponents, multiplication, division, addition, subtraction. PEMDAS is commonly known as BODMAS in India. BODMAS stands for Brackets, Orders (i.e., powers and square roots, etc.), Division, Multiplication, Addition and Subtraction. PEMDAS and BODMAS are almost the same, and both of them follow the operation order from LEFT to RIGHT.

    Let us try to simplify the above statement using PEMDAS or BODMAS (whichever you prefer to call it). Step 1: 6 ÷ 2 (1+2) (parentheses first) Step 2: = 6 ÷ 2 * (1+2) (adding the multiplication sign for further clarification) Step 3: = 6 ÷ 2 * (3) (single digit in parentheses; simplify using the operator) Step 4: = 6 ÷ 2 * 3 (remember, the next operation should go LEFT to RIGHT) Step 5: = 3 * 3 (because 6 ÷ 2 = 3; remember, LEFT to RIGHT) Step 6: = 9 (final answer)

    Some find Step 4 confusing and end up multiplying 2 and 3, making Step 5 into 6 ÷ 6. This is incorrect because in that case we did not follow the LEFT to RIGHT order; when we do not follow the order of operations from LEFT to RIGHT, we end up with the answer 1, which is incorrect.

    Let us see what SQL Server returns as a result. I executed the following statement in SQL Server Management Studio: SELECT 6/2*(1+2) It is clear that SQL Server also thinks that the answer should be 9. Let us go ahead and ask Google. I searched for the following term: 6/2(1+2). The result also says the answer should be 9. If you want a further reference, here is a great video which describes why the answer should be 9 and not 1. And here is a fantastic conversation on Google Groups.

    Well, now what is your take on this subject? You are welcome to share constructive feedback, and your answer may be different from mine. NOTE: A healthy conversation about this subject is indeed encouraged, but if there is a single bad word or a comment is flaming, it will be deleted without any notification (no matter how valuable the information it contains). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
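    The grouping can also be made explicit in T-SQL to see both readings side by side:

        -- Left-to-right evaluation, as SQL Server parses it:
        SELECT 6 / 2 * (1 + 2) AS Result;     -- 9
        SELECT (6 / 2) * (1 + 2) AS Result;   -- 9 (same thing, parenthesized)

        -- The "answer is 1" reading requires explicit grouping:
        SELECT 6 / (2 * (1 + 2)) AS Result;   -- 1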


  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course: nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
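    For readers without a Solaris box handy, the SVR4 member header is small enough to quote (field widths as in the common ar.h; verify against your platform's header):

        #define ARMAG   "!<arch>\n"   /* archive magic number */
        #define SARMAG  8
        #define ARFMAG  "`\n"         /* header trailer */

        struct ar_hdr {               /* one per archive member */
            char ar_name[16];         /* member name */
            char ar_date[12];         /* modification time, decimal seconds */
            char ar_uid[6];           /* user id, decimal */
            char ar_gid[6];           /* group id, decimal */
            char ar_mode[8];          /* file mode, octal */
            char ar_size[10];         /* member size, decimal */
            char ar_fmag[2];          /* trailer, ARFMAG */
        };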
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
    As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.

    Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants.
    They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used.

    It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.


  • Tweak Conky Layout via a script

    - by begtognen
    I'm using a script in Conky in order to display my new gmail on my desktop. It works beautifully, but is kind of ugly, and I'm not sure how to fix it. What I've currently got looks like this: And what I'd like is this: Any ideas for how to make that happen are much appreciated. Here's the script I'm currently using (I think I've snipped out the correct part, if I haven't please let me know.) #!/usr/bin/perl use Switch; use Text::Wrap; my $what=$ARGV[0]; $user="username"; #username for gmail account $pass="password"; #password for gmail account $file="/tmp/gmail.html"; #temporary file to store gmail #wrap format for subject $Text::Wrap::columns=65; #Number of columns to wrap subject at $initial_tab=""; #Tab for first line of subject $subsequent_tab="\t"; #tab for wrapped lines $quote="\""; #put quotes around subject #limit the number of emails to be displayed $emails=-1; #if -1 display all emails &passwd; #give password the proper url character encoding switch($what){ #determine what the user wants case "n" {&gmail; print "$new\n";} #print number of new emails case "s" { #print $from and $subj for new email &gmail; if ($new0){ my $size=@from; if ($emails!=-1 && $size$emails){$size=$emails;} #limit number of emails displayed for(my $i=0; $i$emails){print "$emails out of $size new emails displayed\n";} } } case "e" { #print number of new emails, $from, and $subj &gmail; if($new==0){print "You have no new emails.\n";} else{ print "You have $new new email(s).\n"; my $size=@from; if ($emails!=-1 && $size$emails){$size=$emails;} #limit number of emails displayed for(my $i=0; $i$emails){print "$emails out of $size new emails displayed\n";} } } else { print "Usage Error: gmail.pl \n"; print "\tn displays number of new emails\n"; print "\ts displays from line and subject line for each new email.\n"; print "\te displays the number of new emails and from line plus \n"; print "\t\tsubject line for each new email.\n"; } #didn't give proper option } sub gmail{ if(!(-e $file)){ #create file if it does not exists `touch $file`; } #get new emails `wget -O - https://$user:$pass\@mail.google.com/mail/feed/atom --no-check-certificate $file`; open(IN, $file); #open $file my $i=0; #initialize count $new=0; #initialize new emails to 0 my $flag=0; while(){ #cycle through $file if(//){$flag=1;} elsif(/(\d+)/){$new=$1;} #grab number of new emails elsif($flag==1){ if(/.+/){push(@subj, &msg);} #grab new email titles elsif(/(.+)/){push(@from, $1); $flag=0;} #grab new email from lines } } close(IN); #close $file } sub passwd{ #change to url escape codes in password #URL ESCAPE CODES $_=$pass; s/\%/\%25/g; s/\#/\%23/g; s/\$/\%24/g; s/\&/\%26/g; s/\//\%2F/g; s/\:/\%3A/g; s/\;/\%3B/g; s/\/\%3E/g; s/\?/\%3F/g; s/\@/\%40/g; s/\[/\%5B/g; s/\\/\%5C/g; s/\]/\%5D/g; s/\^/\%5E/g; s/\`/\%60/g; s/\{/\%7B/g; s/\|/\%7C/g; s/\}/\%7D/g; s/\~/\%7E/g; $pass=$_; } sub msg{ #THE HTML CODED CHARACTER SET [ISO-8859-1] chomp; s/(.+)/$1/; #get just the subject #now replace any special characters s/\&\#33\;/!/g; #Exclamation mark s/\&\#34\;/"/g; s/\"\;/"/g; #Quotation mark s/\&\#35\;/#/g; #Number sign s/\&\#36\;/\$/g; #Dollar sign s/\&\#37\;/%/g; #Percent sign s/\&\#38\;/&/g; s/\&\;/&/g; #Ampersand s/\&\#39\;/'/g; #Apostrophe s/\&\#40\;/(/g; #Left parenthesis s/\&\#41\;/)/g; #Right parenthesis s/\&\#42\;/*/g; #Asterisk s/\&\#43\;/+/g; #Plus sign s/\&\#44\;/,/g; #Comma s/\&\#45\;/-/g; #Hyphen s/\&\#46\;/./g; #Period (fullstop) s/\&\#47\;/\//g; #Solidus (slash) s/\&\#58\;/:/g; #Colon s/\&\#59\;/\;/g; #Semi-colon s/\&\#60\;//g; s/\>\;//g; 
#Greater than
s/\&\#63\;/\?/g;   #Question mark
s/\&\#64\;/\@/g;   #Commercial at
s/\&\#91\;/\[/g;   #Left square bracket
s/\&\#92\;/\\/g;   #Reverse solidus (backslash)
s/\&\#93\;/\]/g;   #Right square bracket
s/\&\#94\;/\^/g;   #Caret
s/\&\#95\;/_/g;    #Horizontal bar (underscore)
s/\&\#96\;/\`/g;   #Acute accent
s/\&\#123\;/\{/g;  #Left curly brace
s/\&\#124\;/|/g;   #Vertical bar
s/\&\#125\;/\}/g;  #Right curly brace
s/\&\#126\;/~/g;   #Tilde
s/\&\#161\;/¡/g;   #Inverted exclamation
s/\&\#162\;/¢/g;   #Cent sign
s/\&\#163\;/£/g;   #Pound sterling
s/\&\#164\;/¤/g;   #General currency sign
s/\&\#165\;/¥/g;   #Yen sign
s/\&\#166\;/¦/g;   #Broken vertical bar
s/\&\#167\;/§/g;   #Section sign
s/\&\#168\;/¨/g;   #Umlaut (dieresis)
s/\&\#169\;/©/g;  s/\&copy\;/©/g;    #Copyright
s/\&\#170\;/ª/g;   #Feminine ordinal
s/\&\#171\;/«/g;   #Left angle quote, guillemotleft
s/\&\#172\;/¬/g;   #Not sign
s/\&\#174\;/®/g;   #Registered trademark
s/\&\#175\;/¯/g;   #Macron accent
s/\&\#176\;/°/g;   #Degree sign
s/\&\#177\;/±/g;   #Plus or minus
s/\&\#178\;/²/g;   #Superscript two
s/\&\#179\;/³/g;   #Superscript three
s/\&\#180\;/´/g;   #Acute accent
s/\&\#181\;/µ/g;   #Micro sign
s/\&\#182\;/¶/g;   #Paragraph sign
s/\&\#183\;/·/g;   #Middle dot
s/\&\#184\;/¸/g;   #Cedilla
s/\&\#185\;/¹/g;   #Superscript one
s/\&\#186\;/º/g;   #Masculine ordinal
s/\&\#187\;/»/g;   #Right angle quote, guillemotright
s/\&\#188\;/¼/g;  s/\&frac14\;/¼/g;  #Fraction one-fourth
s/\&\#189\;/½/g;  s/\&frac12\;/½/g;  #Fraction one-half
s/\&\#190\;/¾/g;  s/\&frac34\;/¾/g;  #Fraction three-fourths
s/\&\#191\;/¿/g;   #Inverted question mark
s/\&\#192\;/À/g;   #Capital A, grave accent
s/\&\#193\;/Á/g;   #Capital A, acute accent
s/\&\#194\;/Â/g;   #Capital A, circumflex accent
s/\&\#195\;/Ã/g;   #Capital A, tilde
s/\&\#196\;/Ä/g;   #Capital A, dieresis or umlaut mark
s/\&\#197\;/Å/g;   #Capital A, ring
s/\&\#198\;/Æ/g;   #Capital AE dipthong (ligature)
s/\&\#199\;/Ç/g;   #Capital C, cedilla
s/\&\#200\;/È/g;   #Capital E, grave accent
s/\&\#201\;/É/g;   #Capital E, acute accent
s/\&\#202\;/Ê/g;   #Capital E, circumflex accent
s/\&\#203\;/Ë/g;   #Capital E, dieresis or umlaut mark
s/\&\#204\;/Ì/g;   #Capital I, grave accent
s/\&\#205\;/Í/g;   #Capital I, acute accent
s/\&\#206\;/Î/g;   #Capital I, circumflex accent
s/\&\#207\;/Ï/g;   #Capital I, dieresis or umlaut mark
s/\&\#208\;/Ð/g;   #Capital Eth, Icelandic
s/\&\#209\;/Ñ/g;   #Capital N, tilde
s/\&\#210\;/Ò/g;   #Capital O, grave accent
s/\&\#211\;/Ó/g;   #Capital O, acute accent
s/\&\#212\;/Ô/g;   #Capital O, circumflex accent
s/\&\#213\;/Õ/g;   #Capital O, tilde
s/\&\#214\;/Ö/g;   #Capital O, dieresis or umlaut mark
s/\&\#215\;/×/g;   #Multiply sign
s/\&\#216\;/Ø/g;   #Capital O, slash
s/\&\#217\;/Ù/g;   #Capital U, grave accent
s/\&\#218\;/Ú/g;   #Capital U, acute accent
s/\&\#219\;/Û/g;   #Capital U, circumflex accent
s/\&\#220\;/Ü/g;   #Capital U, dieresis or umlaut mark
s/\&\#221\;/Ý/g;   #Capital Y, acute accent
s/\&\#222\;/Þ/g;   #Capital THORN, Icelandic
s/\&\#223\;/ß/g;   #Small sharp s, German (sz ligature)
s/\&\#224\;/à/g;   #Small a, grave accent
s/\&\#225\;/á/g;   #Small a, acute accent
s/\&\#226\;/â/g;   #Small a, circumflex accent
s/\&\#227\;/ã/g;   #Small a, tilde
s/\&\#228\;/ä/g;   #Small a, dieresis or umlaut mark
s/\&\#229\;/å/g;   #Small a, ring
s/\&\#230\;/æ/g;   #Small ae dipthong (ligature)
s/\&\#231\;/ç/g;   #Small c, cedilla
s/\&\#232\;/è/g;   #Small e, grave accent
s/\&\#233\;/é/g;   #Small e, acute accent
s/\&\#234\;/ê/g;   #Small e, circumflex accent
s/\&\#235\;/ë/g;   #Small e, dieresis or umlaut mark
s/\&\#236\;/ì/g;   #Small i, grave accent
s/\&\#237\;/í/g;   #Small i, acute accent
s/\&\#238\;/î/g;   #Small i, circumflex accent
s/\&\#239\;/ï/g;   #Small i, dieresis or umlaut mark
s/\&\#240\;/ð/g;   #Small eth, Icelandic
s/\&\#241\;/ñ/g;   #Small n, tilde
s/\&\#242\;/ò/g;   #Small o, grave accent
s/\&\#243\;/ó/g;   #Small o, acute accent
s/\&\#244\;/ô/g;   #Small o, circumflex accent
s/\&\#245\;/õ/g;   #Small o, tilde
s/\&\#246\;/ö/g;   #Small o, dieresis or umlaut mark
s/\&\#247\;/÷/g;   #Division sign
s/\&\#248\;/ø/g;   #Small o, slash
s/\&\#249\;/ù/g;   #Small u, grave accent
s/\&\#250\;/ú/g;   #Small u, acute accent
s/\&\#251\;/û/g;   #Small u, circumflex accent
s/\&\#252\;/ü/g;   #Small u, dieresis or umlaut mark
s/\&\#253\;/ý/g;   #Small y, acute accent
s/\&\#254\;/þ/g;   #Small thorn, Icelandic
s/\&\#255\;/ÿ/g;   #Small y, dieresis or umlaut mark
s/^\s+//;
return $_;
}

    Read the article

  • Getting Started with StreamInsight 2.1

    - by Roman Schindlauer
If you're just beginning to get familiar with StreamInsight, you may be looking for a way to get started. What are the basics? How can I get my first StreamInsight application running so I can see how it works? Where is the 'front door' that will get me going? If that describes you, then this blog entry might be just what you need. If you're already a StreamInsight wiz, keep reading anyway - you may find some helpful links here that you weren't aware of. But here's what we'd like from you experienced readers in particular: if you know of other good resources that we missed, please feel free to add them in the comments below. We appreciate you sharing your expertise.

The Book
The basic documentation for StreamInsight is located in the MSDN Library (Microsoft StreamInsight 2.1). You'll notice that previous versions of StreamInsight are still there (1.2 and 2.0), but if you're just getting started you can stick to the 2.1 section. The documentation has been organized to function as reference material, which is fine after you're familiar with the technology. But if you're trying to learn the basics, you might want to take a different path instead of just starting at the top. The following is one map you can use.

What Is StreamInsight?
Here is a sequence of topics that should give you a good overview of what StreamInsight is and how it works:
- Overview - answers the question, "what is it?"
- StreamInsight Server Architecture - gives you a quick look at a high-level architectural drawing
- StreamInsight Concepts - lays out an overview of the basic components
- Deploying StreamInsight Entities to a StreamInsight Server - describes the mechanics of how these components work together

Getting an Example Running
Once you have this background, go ahead and install StreamInsight and get a basic example up and running:
- Installation - download and install the software
- StreamInsight Examples - walk through a set of 3 simple StreamInsight applications that work together to demonstrate what you learned in the topics above; you can copy and paste the code into Visual Studio, compile, and run
That's it - you now have a real, functioning StreamInsight system! Now that you have a handle on the basics, you might want to start digging deeper.

Digging Deeper
Here's a suggested path through the documentation to help you understand the next layer of StreamInsight technologies:
- Using Event Sources and Event Sinks - sources supply data and sinks consume it; this topic gives you an overview of how they work
- Publishing and Connecting to the StreamInsight Server - practical details on how to set up a StreamInsight server
- A Hitchhiker's Guide to StreamInsight 2.1 Queries - queries are the heart of how StreamInsight performs data analytics, and this whitepaper will help you really understand how they work
- Using StreamInsight LINQ - root through this section for technical details on specific query components
- Using the StreamInsight Event Flow Debugger - in addition to troubleshooting, the debugger is a great way to learn more about what goes on inside a StreamInsight application

And Even Deeper
Finally, to get a handle on some of the more complex things you can do with StreamInsight, dig into these:
- Input and Output Adapters - adapters can be useful for handling more complex sources and sinks
- Building Resilient StreamInsight Applications - a resilient application is able to recover from system failures
- Operations - this section will help you monitor and troubleshoot a running StreamInsight system

The StreamInsight Community
As you're designing and developing your StreamInsight solutions, you probably will find it helpful to see working examples or to learn tips and tricks from others. Or maybe you need a place to post a vexing question. Here are some community resources that we have found useful. If you know of others, please add them in the comments below.

Code samples and tools
- Official StreamInsight code samples
- Introduction to LinqPad Driver for StreamInsight 2.1 - LinqPad is a very useful tool for developing queries
The following case studies are based on earlier versions of StreamInsight, but they still are useful examples:
- Microsoft Media Analytics - real-time monitoring and analytics
- Edgenet - responding to information from multiple sources
- ICONICS - managing energy usage

Blogs
- Microsoft StreamInsight
- Ruminations of J.net
- Richard Seroter's Architecture Musings
- pluralsight

Forums
- MSDN StreamInsight Forum
- stackoverflow

Training
- Microsoft StreamInsight Fundamentals (“Introducing StreamInsight” is free) from pluralsight

Twitter
- @streaminsight

You’re a StreamInsight Expert
That should get you going. Please add any other resources you have found useful in the comments below.

Regards,
The StreamInsight Team
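To give a flavor of the LINQ queries that the Hitchhiker's Guide above covers in depth, here is a minimal sketch of a tumbling-window aggregate in C#. The SensorReading type and the source stream are assumptions made for illustration; they are not taken from any of the linked documentation.

// Sketch only: a 5-second tumbling-window average over an assumed
// IQStreamable<SensorReading> named source (SensorReading is hypothetical
// and would carry a double Value property).
var averages = from win in source.TumblingWindow(TimeSpan.FromSeconds(5))
               select win.Avg(r => r.Value);

Bound to a sink, a query of this shape emits one average per window, which is the basic pattern most of the documentation examples build on.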

    Read the article

  • Enjoy How-To Geek User Style Script Goodness

    - by Asian Angel
Most people may not be aware of it, but there are two user style scripts that have been created just for use with the How-To Geek website. If you are curious, then join us as we look at these two scripts at work. Note: User style scripts and user scripts can be added to most browsers, but we are using Firefox for our examples here.

The How-to Geek Wide User Style Script
The first of the two scripts affects the viewing width of the website's news content. Here you can see everything set at the normal width. When you visit the UserStyles website you will be able to view basic information about the script and see the code itself if desired. On the right side of the page is the good part though. Since we are using Firefox with Greasemonkey installed, we chose the "install as a user script" option. Notice that the script is available for other browsers as well (very nice!). Within a few moments of clicking on the "install as a user script" button you will see the following window asking confirmation for installing the script. After installing the user style script and refreshing the page, it has now stretched out to fill 90% of the browser window's area. Definitely nice!

The How-To Geek – News and Comments (600px) User Style Script
The second script can be very useful for anyone with the limited screen real-estate of a netbook. You can see another of the articles from here at the site viewed in a "normal mode". Once again you can view basic information about this particular user style script and view the code if desired. As above, we have the Firefox/Greasemonkey combination at work, so we installed it as a user script. This is one of the great things about using Greasemonkey: it always checks with you to make certain that no unauthorized scripts are added. Once the script was installed and we refreshed the page, things looked very, very different. All the focus has been placed on the article itself and any comments attached to the article. For those who may be curious, this is what the homepage looks like using this script.

Conclusion
If you have been wanting to add a little bit of "viewing spice" to your browser for the How-To Geek website, then definitely pop over to the User Styles website and give these two scripts a try. A rough idea of what such a user style looks like is sketched below. Using Opera Browser? See our how-to for adding user scripts to Opera here.

Links:
- Install the How-to Geek Wide User Style Script
- Install the How-To Geek – News and Comments (600px) User Style Script
- Download the Greasemonkey extension for Firefox (Mozilla Add-ons)
- Download the Stylish extension for Firefox (Mozilla Add-ons)
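For readers wondering what a user style script actually contains, here is a minimal sketch of a Stylish-format rule in the spirit of the first script above. The #content selector and the 90% width are illustrative assumptions; the real scripts on UserStyles.org may use different selectors and values.

/* Sketch of a width-stretching user style (assumed selector, for illustration only). */
@-moz-document domain("howtogeek.com") {
  /* Stretch the main content area to most of the browser window. */
  #content {
    width: 90% !important;
    max-width: none !important;
  }
}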

    Read the article

  • AccelerometerInput XNA GameComponent

    - by Michael B. McLaughlin
Bad accelerometer controls kill otherwise good games. I decided to try to do something about it. So I created an XNA GameComponent called AccelerometerInput. It's still a beta project, but you are welcome to try it, use it, modify it, etc. I'm releasing it under the terms of the Microsoft Public License. Important info:

First, it only supports tilt-style controls currently. I have not implemented motion-style controls yet (and make no promises as to when I might find time to do so).

Second, I commented it heavily so that you can (hopefully) understand what it is doing. Please read the comments and examine the sample game for a usage overview. There are configurable parameters which I encourage you to make use of (both by modifying the default values where your testing shows it to be appropriate and also by implementing a calibration mechanism in your game that lets the user adjust those configurable values based on his or her own circumstances).

Third, even with this code, accelerometer controls are still a fairly advanced topic area; you will likely find nothing but disappointment if you simply plunk this into some project without testing it on a device (or preferably on several devices).

Fourth, if you do try this code and find that something doesn't work as expected on your phone, please let me know, as I want to improve it and can only do so with your help. Let me know what phone model it is, what you tried doing, what you expected, and what result you had instead. I may or may not be able to incorporate it into the code, but I can let others know at the very least so that they can make appropriate modifications to their games (I'm hopeful that all phones are reasonably similar in their workings and require, at most, a slight calibration change, but I simply don't know).

Fifth, although I'll do my best to answer any questions you may have about it, I'm very busy with a number of things currently, so it might take a little while. Please look through the code and examine the comments and sample game first before asking any questions. It's likely that the answer is in there. If not, or if you just aren't really sure, ask away.

Sixth, there are differences between a portrait-mode game and a landscape-mode game (specifically in the appropriate default tilt adjustment for toward-the-user/away-from-the-user calculations). This is documented and the default is set for landscape. If you use this for a portrait game, make the appropriate change (look for the TODO: comment in AccelerometerInput.cs).

Seventh, no provision whatsoever is made for disabling screen locking. It is up to you to implement that and to take appropriate measures to detect when the user has been idle for too long and time out the game. That code is very game-specific. If you have questions about such matters, consult the relevant MSDN documentation and, if you still have questions, visit the App Hub forums and ask there. I answer questions there a lot, so I may even stumble across your question and answer it. But that's a much better forum than the comments section here for questions of that sort, so I would appreciate it if you asked idle-detection-related questions there (or on some other suitable site that you may be more familiar and comfortable with).

Eighth, this is an XNA GameComponent intended for XNA-based games on WP7. A sufficiently knowledgeable Silverlight developer should have no problem adapting it for use in a Silverlight game or app. I may create a Silverlight version at some point myself. Right now I do not have the time, unfortunately.

Ok. Without further ado: http://www.bobtacoindustries.com/developers/utils/AccelerometerInput.zip

Have a great St. Patrick’s Day!
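As a rough idea of how an XNA GameComponent like this gets wired up, here is a minimal sketch following the standard GameComponent pattern. The AccelerometerInput constructor shown is an assumption; check the comments and the sample game in the actual download for the real usage.

// Sketch only: registering a GameComponent in a WP7 XNA game.
// The exact AccelerometerInput API may differ; see the sample game in the zip.
public class MyGame : Microsoft.Xna.Framework.Game
{
    public MyGame()
    {
        // Components added here have their Update() called automatically each frame.
        Components.Add(new AccelerometerInput(this)); // assumed constructor signature
    }
}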

    Read the article

  • How to be Agile when new work keeps affecting completed work?

    - by jdln
The project I'm working on is to re-skin an existing website. The functionality will stay the same; it's just the styles that are changing. The HTML is not changing; I'm only modifying the CSS files. The site is pretty complex. There are dozens of pages. Users can be logged in and have a number of different roles. Depending on their role, the content of the page and what pages they are allowed to see varies. We're using Git and GitHub.

I'm trying to write CSS that works as components, so when the same form elements, headings, etc. appear on multiple pages they are already styled and are consistent. Most of the time this is working well. Sadly, the format and class names in the HTML are at times messy and unpredictable. When I fix something on one page it can break another. The job is also harder as no one knows exactly all the variations that are possible due to the user roles. As such, I'm continuously finding new variations as I go along.

I'm making headway by putting a lot of comments in my CSS. If I need to remove a CSS rule, I'll comment it out so I can still see it with the Chrome dev tools, and I'll put a comment in the CSS saying why I removed it and for what page this was done (a rough sketch of this convention appears below). This means that if on another page I'm about to add the rule back to fix a different problem, there is more of a chance I will see how this would break the first page. This allows me to either find a different solution that will work for both pages, or make the override page-specific. This has been working quite well for me.

If I had complete free rein and the only deadline was to finish the project by the end, then this method would be fine. However, my manager is trying to mitigate risk by breaking the work into areas to be completed per sprint. This is counter to how I have been approaching things, as something like my typography styles will affect all other pages on the site. The other issue is that the different stakeholders want to sign off each section as I go along. However, once I've finished a section it may change if I change CSS that affects it and also affects a new section I'm working on. I've asked that the stakeholders have a quick unofficial sign-off in stages (e.g. per sprint) and have the final official sign-off at the end of the project, but this is being met with resistance. I do understand why it would be higher risk to do this, but the only way to guarantee that a signed-off section will not change is to make ALL future changes page-specific.

In addition to this, I'm being told that all work that I push to the Git repo should be ready to go live, and as such should not contain any code comments. This is risky for me, as I won't know until I've finished the site whether I will ever benefit from these comments or not.

Has anyone else been in a similar situation and managed to find a compromise that worked for my development approach and also the desires of management and stakeholders to have a more Agile approach? A more Agile workflow works great when you can break the work into components and know that once something is done it won't be affected by future work. However, the nature of this project makes this hard to achieve.
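A minimal sketch of the commenting convention described above; the selectors, page names, and date are invented purely for illustration:

/* Removed 2013-04-02: this float broke the sidebar on the /checkout page.
   Left in place (commented out) so it still turns up in Chrome dev tools
   and in text searches of the stylesheet. */
/* .form-row label { float: left; } */

/* Page-specific override instead of a global change: only /checkout
   needs the narrower heading, so already signed-off pages are untouched. */
.page-checkout h2 { font-size: 1.2em; }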

    Read the article

  • How can I use external expressions in Linq with EF4 (and LINQKit)?

    - by neo
I want to separate out often-used expressions in LINQ queries. I'm using Entity Framework 4 and also LINQKit, but I still don't know how I should do it the right way. Let me give you an example:

Article article = dataContainer.Articles.FirstOrDefault(a => a.Id == id);
IEnumerable<Comment> comments = (from c in container.Comments
                                 where CommentExpressions.IsApproved.Invoke(c)
                                 select c);

public static class CommentExpressions
{
    public static Expression<Func<Comment, bool>> IsApproved
    {
        get { return c => c.IsApproved; }
    }
}

Of course the IsApproved expression would be something much more complicated. The problem is that the Invoke() won't work, because I didn't call AsExpandable() from LINQKit on container.Comments, but I can't call it because it's just an ICollection instead of an ObjectSet. So my question is: do I always have to go through the data context when I want to include external expressions, or can I somehow use it on the object I fetched (Article)? Any ideas or best practices? Thanks very much! neo
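One pattern that may help here, sketched under the assumption that LINQKit's standard AsExpandable()/Invoke() pair is in play: an ICollection can be lifted to an IQueryable first, after which LINQKit is able to expand the invoked expression.

// Sketch: expanding an external expression over an already-loaded collection.
// AsQueryable() comes from System.Linq; AsExpandable() and Invoke() require
// a using LinqKit; directive.
IEnumerable<Comment> comments =
    article.Comments
           .AsQueryable()
           .AsExpandable()
           .Where(c => CommentExpressions.IsApproved.Invoke(c))
           .ToList();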

    Read the article

  • Form is submitting when the page loads

    - by RailAddict
I have a really simple Rails app, basically an article with comments. I want the article page to show the article, the comments underneath, and then a textbox to write a comment and a submit button to submit it. I got it all working except for one (big) problem: when the page loads, e.g. example.com/article/1, a blank comment is submitted to the database. I fixed that by including validates_presence_of :body in the Comment model, but that results in a validation error being shown as soon as the page loads. This is my code, by the way:

def show
  @article = Article.find(params[:id])
  @comment = @article.comments.create(params[:comment])

  respond_to do |format|
    format.html # show.html.erb
    format.xml  { render :xml => @article }
  end
end

and

<% form_for([@article, @comment]) do |f| %>
  <p>
    <%= f.label :commenter %><br />
    <%= f.text_field :commenter %>
  </p>
  <p>
    <%= f.label :body %><br />
    <%= f.text_area :body %>
  </p>
  <p>
    <%= f.submit "Create" %>
  </p>
<% end %>
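A likely fix, sketched here under standard Rails semantics rather than taken from the question: comments.create saves a row on every GET of show, whereas comments.build only instantiates an unsaved comment to back the form, leaving the actual INSERT to a separate create action.

def show
  @article = Article.find(params[:id])
  # build returns a new, unsaved comment for the form to render;
  # nothing is written to the database until the form is submitted.
  @comment = @article.comments.build

  respond_to do |format|
    format.html # show.html.erb
    format.xml  { render :xml => @article }
  end
end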

    Read the article

  • CStdioFile Undeclared Identifier

    - by Eric Regnier
I am unable to compile my code when using a CStdioFile type. I have included the afx.h header file but I am still receiving errors. What I am trying to do is write a CString that contains an entire XML document to file. This CString contains comments inside of it, but when I use other objects such as wofstream to write it to file, these comments are stripped out for some reason. So that is why I am trying to write this to file using CStdioFile now. If anyone can help me out as to why I cannot compile my code using CStdioFile, or suggest any other way to write a CString that contains XML comments to file, please let me know! Below is my code using CStdioFile:

CStdioFile xmlFile;
xmlFile.Open( "c:\\test.txt", CFile::modeCreate | CFile::modeWrite | CFile::typeText );
xmlFile.WriteString( m_sData );
xmlFile.Close();

And my errors:

error C2065: 'CStdioFile' : undeclared identifier
error C2065: 'modeCreate' : undeclared identifier
error C2065: 'modeWrite' : undeclared identifier
error C2065: 'typeText' : undeclared identifier
error C2065: 'xmlFile' : undeclared identifier
error C2146: syntax error : missing ';' before identifier 'xmlFile'
error C2228: left of '.Close' must have class/struct/union (type is ''unknown-type'')
error C2228: left of '.Open' must have class/struct/union (type is ''unknown-type'')
error C2228: left of '.WriteString' must have class/struct/union (type is ''unknown-type'')
error C2653: 'CFile' : is not a class or namespace name
error C2653: 'CFile' : is not a class or namespace name
error C2653: 'CFile' : is not a class or namespace name
error C3861: 'xmlFile': identifier not found, even with argument-dependent lookup
error C3861: 'xmlFile': identifier not found, even with argument-dependent lookup
error C3861: 'xmlFile': identifier not found, even with argument-dependent lookup
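For what it's worth, errors like these usually mean the compiler never actually saw afx.h in this translation unit (for example, the file is built as part of a non-MFC project). A minimal sketch that compiles in an MFC-enabled project might look like the following; the stdafx.h name and the _T() macro reflect common MFC project defaults and are assumptions, not details from the question.

// Sketch: writing a CString (XML, comments and all) with CStdioFile.
// Assumes an MFC project; in a non-MFC project, <afx.h> must be included
// before <windows.h> and the MFC libraries must be linked.
#include "stdafx.h"   // typically pulls in afx.h in an MFC project
#include <afx.h>

void WriteXml(const CString& data)
{
    CStdioFile xmlFile;
    if (xmlFile.Open(_T("c:\\test.txt"),
                     CFile::modeCreate | CFile::modeWrite | CFile::typeText))
    {
        xmlFile.WriteString(data);  // WriteString does not strip XML comments
        xmlFile.Close();
    }
}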

    Read the article

  • [EF 4 POCO] Problem with INSERT...

    - by Darmak
Hi all, I'm so frustrated because of this problem, you have no idea... I have 2 classes: Post and Comment. I use EF 4 POCO support, and I don't have foreign key columns in my .edmx model (Comment doesn't have a PostID property, but it does have a Post property):

class Comment
{
    public Post Post { get; set; }
    // ...
}

class Post
{
    public virtual ICollection<Comment> Comments { get; set; }
    // ...
}

Can someone tell me why the code below doesn't work? I want to create a new comment for a post:

Comment comm = context.CreateObject<Comment>();
Post post = context.Posts.Where(p => p.Slug == "something").SingleOrDefault();
// post != null, so don't worry, be happy
// here I set all other comm properties and...
comm.Post = post;
context.AddObject("Comments", comm); // Exception here
context.SaveChanges();

The exception is: "Cannot insert the value NULL into column 'PostID', table 'Blog.Comments'; column does not allow nulls. INSERT fails." This 'PostID' column is, of course, a foreign key to the Posts table. Any help will be appreciated!
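With independent associations like this, one pattern that often avoids the NULL foreign key is to add the new comment through the parent's navigation collection, so the relationship is tracked before SaveChanges. This is a sketch based on the model shown above, not a verified fix for this exact project:

// Sketch: let the relationship drive the insert instead of AddObject.
Post post = context.Posts.Where(p => p.Slug == "something").SingleOrDefault();

Comment comm = context.CreateObject<Comment>();
// ... set the other comment properties here ...

// Adding via the navigation property registers both the entity and the
// relationship, so EF can populate the hidden PostID foreign key column.
post.Comments.Add(comm);
context.SaveChanges();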

    Read the article

  • How can I merge multiple Compass Resources into one, with one score?

    - by Brent Fisher
I am trying to integrate Compass into my platform using the JDBC ResultSetToResourceMapping. What I want to do is set it up so that I could have multiple result set mappings tied to one Resource, producing one result with one merged score. I have tried to trick Compass into doing this by mapping the same id across them, even though the property fields are different, but it just ends up giving me separate hits for each.

E.g. I have the following data model: Cases and Comments. One case might have several comments. Say I search for a term that appears in multiple comments. Right now, I get a hit for each one, each with a different score. Is there a way that I could merge or aggregate those hits into one hit? Say, instead of:

Score    Entity   ID     Snippets
100.0%   Case     3558   ... The fox jumped over the lazy dog ...
60.0%    Case     3558   ... In Alabama today, three jumping turtles were ...
25.0%    Case     3558   ... Three jumpers fled the scene ...

I get:

Score    Entity   ID     Snippets
100.0%   Case     3558   The fox jumped over the lazy dog ... In Alabama today, three jumping turtles were ... Three jumpers fled the scene ...

Where the latter score is an aggregated score.

    Read the article

  • Reload jQuery when returning partial view from a controller?

    - by mattruma
I am making the following call in my web page:

<div id="comments">
  <fieldset>
    <h4>Post your comment</h4>
    <% using (this.Ajax.BeginForm("CreateStoryComment", "Story",
           new { id = story.StoryId },
           new AjaxOptions { UpdateTargetId = "comments", OnSuccess = "OnStoryCommentAdded" })) { %>
      <%= this.Html.TextArea("Body", string.Empty) %>
      <input type="submit" value="Add Comment" />
    <% } %>
  </fieldset>
</div>

There's other code, but that's the gist of it. The controller returns a partial view that "refreshes" everything in the comments div. My problem is that the following jQuery is not being applied:

$(".comment .delete").click(function () {
    if (confirm("Are you sure you want to delete this record?") == true) {
        $.post(this.href);
        $(this).parents(".comment").fadeOut("normal");
    }
    return false;
});

I'm assuming it's not being attached because the jQuery loads after the initial page load. If my assumption is correct, how do I get this jQuery to "refresh"? Hopefully that makes some sense! :)
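For jQuery of that era, one common approach (a sketch of the usual remedy, not necessarily the author's eventual fix) is to bind with live() so the handler survives the Ajax replacement of the comments div; alternatively, the same click binding can be re-run inside the OnStoryCommentAdded callback.

// Sketch: delegated binding so the handler also applies to elements
// added later by Ajax. live() is the jQuery 1.3-1.8 API; newer versions
// use $(document).on("click", ".comment .delete", ...) instead.
$(".comment .delete").live("click", function () {
    if (confirm("Are you sure you want to delete this record?")) {
        $.post(this.href);
        $(this).parents(".comment").fadeOut("normal");
    }
    return false;
});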

    Read the article

  • Detect IE version in Javascript

    - by Chad Decker
I want to bounce users of our web site to an error page if they're using a version of Internet Explorer prior to v9. It's just not worth our time and money to support IE pre-v9. Users of all other non-IE browsers are fine and shouldn't be bounced. Here's the proposed code:

if (navigator.appName.indexOf("Internet Explorer") != -1) {
    // yeah, he's using IE
    var badBrowser = (
        navigator.appVersion.indexOf("MSIE 9") == -1 &&  // v9 is ok
        navigator.appVersion.indexOf("MSIE 1") == -1     // v10, 11, 12, etc. is fine too
    );
    if (badBrowser) {
        // navigate to error page
    }
}

Will this code do the trick? To head off a few comments that will probably be coming my way:

[1] Yes, I know that users can forge their useragent string. I'm not concerned.
[2] Yes, I know that programming pros prefer sniffing out feature support instead of browser type, but I don't feel this approach makes sense in this case. I already know that all (relevant) non-IE browsers support the features that I need and that all pre-v9 IE browsers don't. Checking feature by feature throughout the site would be a waste.
[3] Yes, I know that someone trying to access the site using IE v1 (or >= 20) wouldn't get 'badBrowser' set to true and the warning page wouldn't be displayed properly. That's a risk we're willing to take.
[4] Yes, I know that Microsoft has "conditional comments" that can be used for precise browser version detection. IE no longer supports conditional comments as of IE 10, rendering this approach absolutely useless.

Any other obvious issues to be aware of? Thanks.
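One way to sidestep the version-string edge cases noted in [3] is to parse the number out instead of matching fixed prefixes. This is a sketch; it is still user-agent sniffing, with all the caveats of [1]:

// Sketch: extract the MSIE major version and compare numerically.
var match = navigator.appVersion.match(/MSIE (\d+)/);
var badBrowser = match !== null && parseInt(match[1], 10) < 9;
if (badBrowser) {
    // navigate to error page
}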

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >