Search Results

Search found 9016 results on 361 pages for 'regex libraries'.


  • Library order is important

    - by Darryl Gove
    I've written quite extensively about link ordering issues, but I've not discussed the interaction between archive libraries and shared libraries. So let's take a simple program that calls a maths library function: #include <math.h> int main() { for (int i=0; i<10000000; i++) { sin(i); } } We compile and run it to get the following performance: bash-3.2$ cc -g -O fp.c -lm bash-3.2$ timex ./a.out real 6.06 user 6.04 sys 0.01 Now most people will have heard of the optimised maths library which is added by the flag -xlibmopt. This contains optimised versions of key mathematical functions, and in this instance, using the library doubles performance: bash-3.2$ cc -g -O -xlibmopt fp.c -lm bash-3.2$ timex ./a.out real 2.70 user 2.69 sys 0.00 The optimised maths library is provided as an archive library (libmopt.a), and the driver adds it to the link line just before the maths library - this causes the linker to pick the definitions provided by the static library in preference to those provided by libm. We can see the processing by asking the compiler to print out the link line: bash-3.2$ cc -### -g -O -xlibmopt fp.c -lm /usr/ccs/bin/ld ... fp.o -lmopt -lm -o a.out... The flag to the linker is -lmopt, and this is placed before the -lm flag. So what happens when the -lm flag is in the wrong place on the command line? bash-3.2$ cc -g -O -xlibmopt -lm fp.c bash-3.2$ timex ./a.out real 6.02 user 6.01 sys 0.01 If the -lm flag is before the source file (or object file for that matter), we get the slower performance from the system maths library. Why's that? If we look at the link line we can see the following ordering: /usr/ccs/bin/ld ... -lmopt -lm fp.o -o a.out So the optimised maths library is still placed before the system maths library, but the object file is placed afterwards. This would be ok if the optimised maths library were a shared library, but it is not - instead it's an archive library, and archive library processing is different - as described in the linker and library guide: "The link-editor searches an archive only to resolve undefined or tentative external references that have previously been encountered." An archive library can only be used to resolve symbols that are outstanding at that point in the link processing. When fp.o is placed before the libmopt.a archive library, the linker has an unresolved symbol referenced by fp.o, and it will search the archive library to resolve that symbol. If the archive library is placed before fp.o then there are no unresolved symbols at that point, and so the linker doesn't need to use the archive library. This is why libmopt needs to be placed after the object files on the link line. On the other hand, if the linker has observed any shared libraries, then at any point these are checked for any unresolved symbols. The consequence of this is that once the linker "sees" libm it will resolve any symbols it can to that library, and it will not check the archive library to resolve them. This is why libmopt needs to be placed before libm on the link line. This leads to the following order for placing files on the link line: object files, then archive libraries, then shared libraries. If you use this order, then things will consistently get resolved to the archive libraries rather than to the shared libraries.

    Read the article

  • Missing shared library for Rhythmbox

    - by user1450120
    After I upgraded from 13.04 to 13.10 my Rhythmbox wouldn't work. After many failed attempts I ended up uninstalling and removing all traces of Rhythmbox I could find. Now I've reinstalled Rhythmbox, and am getting the error: rhythmbox: error while loading shared libraries: librhythmbox-core.so.7: cannot open shared object file: No such file or directory I've tried sudo apt-get install librhythmbox* only to get Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting 'librhythmbox-core5' for regex 'librhythmbox*' Note, selecting 'librhythmbox-core6' for regex 'librhythmbox*' Note, selecting 'librhythmbox-core7' for regex 'librhythmbox*' librhythmbox-core7 is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded. Any ideas on how to get Rhythmbox back to a working state?

    Read the article

  • Qt4Dotnet on Mac OS X

    - by Tony
    Hello everyone. I'm using the Qt4Dotnet project in order to port an application originally written in C# to Linux and Mac. The port to Linux didn't take much effort and works fine. But Mac (10.4 Tiger) is a bit more stubborn. The problem is: when I try to start my application it throws an exception. The exception states that com.trolltech.qt.QtJambi_LibraryInitializer is unable to find all necessary libraries. The QtJambi library initializer uses the java.library.path VM environment variable. This variable includes the current working directory. I put all necessary libraries in the working directory. When I try to run the application from the MonoDevelop IDE, the initializer is able to load one library, but the other libraries are 'missing': An exception was thrown by the type initializer for com.trolltech.qt.QtJambi_LibraryInitializer --- java.lang.RuntimeException: Loading library failed, progress so far: No 'qtjambi-deployment.xml' found in classpath, loading libraries via 'java.library.path' Loading library: 'libQtCore.4.dylib'... - using 'java.library.path' - ok, path was: /Users/chin/test/bin/Debug/libQtCore.4.dylib Loading library: 'libqtjambi.jnilib'... - using 'java.library.path' Both libQtCore.4.dylib and libqtjambi.jnilib are in the same directory. When I try to run it from the command prompt, the initializer is unable to load even libQtCore.4.dylib. I'm using Qt4Dotnet v4.5.0 (currently the latest) with QtJambi v4.5.2 libraries. This might be the source of the problem, but I'm neither able to compile Qt4Dotnet v4.5.2 by myself nor to find QtJambi v4.5.0 libraries. The project's page states that some sort of patch should be applied to QtJambi's source code in order to be compatible with the Mono framework, but this patch hasn't been released yet. Without this patch the application crashes in a strange manner (other than the library seek fault). I must note that the original QtJambi loads all necessary libraries perfectly, so it might be an issue with the IKVM compiler used to translate QtJambi into a .NET library. Any suggestions on how I can overcome this problem?

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I’m excited to announce the release today of several products: ASP.NET MVC 3 NuGet IIS Express 7.5 SQL Server Compact Edition 4 Web Deploy and Web Farm Framework 2.0 Orchard 1.0 WebMatrix 1.0 The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack. ASP.NET MVC 3 Today we are shipping the final release of ASP.NET MVC 3. You can download and install ASP.NET MVC 3 here. The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features. Some of the improvements include: Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine). Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months Introducing Razor New @model keyword in Razor Layouts with Razor Server-Side Comments with Razor Razor’s @: and <text> syntax Implicit and Explicit code nuggets with Razor Layouts and Sections with Razor Today’s release supports full code IntelliSense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express. JavaScript Improvements ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach. Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries. ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server. This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends. We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project). Previous releases of ASP.NET MVC included the core jQuery library. ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios). We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects). Improved Validation ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an unobtrusive JavaScript implementation).
Today’s release also includes built-in support for Remote Validation - which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3. This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model. ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4. ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated. This enables richer scenarios where you can validate the current value based on another property of the model. We’ve shipped a built-in [Compare] validation attribute with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values. You can use any data access API or technology with ASP.NET MVC. This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications. These two posts of mine cover the latest EF Code First preview and demonstrate how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support). The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project. It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers). You can learn more about it – and install it via NuGet today - from Steve Sanderson’s MvcScaffolding blog post. Output Caching Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC V3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing. This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server. The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site. It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future. Better Dependency Injection ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers.
You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects. Other Goodies ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples: Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates. Improved Add->View Scaffolding support that enables the generation of even cleaner view templates. New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views. Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app. New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models. Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller. New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios. New Html.Raw() helper to indicate that output should not be HTML encoded. New Crypto helpers for salting and hashing passwords. And much, much more… Learn More about ASP.NET MVC 3 We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today: Build your First ASP.NET MVC 3 Application: VB and C# Building the ASP.NET MVC 3 Music Store We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published. How to Upgrade Existing Projects ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your  existing projects. Localized Builds Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available. NuGet Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable.  
The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.  NuGet Gallery This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future. IIS Express 7.5 Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built-into Visual Studio today with the full power of IIS.  Specifically: It’s lightweight and easy to install (less than 5Mb download and a quick install) It does not require an administrator account to run/debug applications from Visual Studio It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules It supports and enables the same extensibility model and web.config file settings that IIS 7.x support It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all) It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.  SQL Server Compact Edition 4 Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. 
You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Tooling Support with VS 2010 SP1 Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.  Web Deploy and Web Farm Framework 2.0 Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them: Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Visit the http://iis.net website to learn more and install them. Both are free. Orchard 1.0 Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built-into Orchard).  Read these tutorials to learn more about how you can setup and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.  WebMatrix 1.0 WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.  
WebMatrix includes IIS Express, SQL CE 4, and ASP.NET - providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today. Summary I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
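
    To make the validation improvements described above concrete, here is a small, hedged C# sketch (the model, property names, and messages are invented for illustration, not taken from the release) showing the new [Compare] attribute that ships with MVC 3 alongside the .NET 4 IValidatableObject interface for model-level rules, plus a controller action using the new ViewBag dynamic property:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;
    using System.Web.Mvc; // [Compare] ships in the System.Web.Mvc namespace in the MVC 3 release

    public class RegisterModel : IValidatableObject
    {
        [Required]
        [StringLength(20, MinimumLength = 6)]
        public string UserName { get; set; }

        [Required]
        [DataType(DataType.Password)]
        public string Password { get; set; }

        [Compare("Password", ErrorMessage = "The passwords do not match.")]
        public string ConfirmPassword { get; set; }

        public DateTime? BirthDate { get; set; }

        // Model-level validation via the .NET 4 IValidatableObject interface:
        // runs after the per-property attributes and can report errors that
        // depend on the state of the whole model.
        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            if (BirthDate.HasValue && BirthDate.Value > DateTime.Today)
            {
                yield return new ValidationResult(
                    "Birth date cannot be in the future.", new[] { "BirthDate" });
            }
        }
    }

    public class AccountController : Controller
    {
        public ActionResult Register()
        {
            // ViewBag uses .NET 4 dynamic support to pass late-bound data to the view.
            ViewBag.Title = "Create an account";
            return View(new RegisterModel());
        }

        [HttpPost]
        public ActionResult Register(RegisterModel model)
        {
            if (!ModelState.IsValid)
            {
                return View(model); // validation messages are rendered client- and server-side
            }
            // ... create the account ...
            return RedirectToAction("Index", "Home");
        }
    }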

    Read the article

  • Using PHP OCI8 with 32-bit PHP on Windows 64-bit

    - by christopher.jones
    The world's migration from 32-bit to 64-bit operating systems is gaining pace. However, I've seen a couple of customers having difficulty with the PHP OCI8 extension and Oracle DB on Windows 64-bit platforms. The errors vary depending on how PHP is run. They may appear in the Apache or PHP log: Unable to load dynamic library 'C:\Program Files (x86)\PHP\ext\php_oci8_11g.dll' - %1 is not a valid Win32 application. or Warning oci_connect(): OCIEnvNlsCreate() failed. There is something wrong with your system - please check that PATH includes the directory with Oracle Instant Client libraries. Other than IIS permission issues, a common cause seems to be trying to use PHP with libraries from an Oracle 64-bit database on the same machine. There is currently no 64-bit version of PHP on http://php.net/ so there is a library mismatch. A solution is to install Oracle Instant Client 32-bit and make sure that PHP uses these libraries, while not interfering with the 64-bit database on the same machine. Warning: The following hacky steps come untested from a Linux user: Unzip Oracle Instant Client 32-bit and move it to C:\WINDOWS\SYSWOW64\INSTANTCLIENT_11_2. You may need to do this in a console with elevated permissions. Edit your PATH environment variable and insert C:\WINDOWS\SYSTEM32\INSTANTCLIENT_11_2 in the directory list before the entry for the Oracle Home library. Windows makes it so all 32-bit applications that reference C:\WINDOWS\SYSTEM32 actually see the contents of the C:\WINDOWS\SYSWOW64 directory. Your 64-bit database won't find an Instant Client in the real, physical C:\WINDOWS\SYSTEM32 directory and will continue to use the database libraries. Some of our Windows team are concerned about this hack and prefer a more "correct" solution that (i) doesn't require changing the Windows system directory (ii) doesn't add to the "memory" burden about what was configured on the system (iii) works when there are multiple database versions installed. The solution is to write a script which will set the 64-bit (or 32-bit) Oracle libraries in the path as needed before invoking the relevant bit-ness application. This does have a weakness when the application is started as a service. As a footnote: If you don't have a local database and simply need to have 32-bit and 64-bit Instant Client accessible at the same time, try the "symbolic" link approach covered in the hack in this OTN forum thread. Reminder warning: This blog post came untested from a Linux user.

    Read the article

  • In what order does the Asset-Pipeline in Ruby on Rails load JavaScript Files?

    - by psycatham
    Hello, so, when I decided to remove the <script></script> tags and benefit from the asset-pipeline instead, complications took place. I am working with Google Maps' API V3, and to benefit from the functions and objects that their code provides, you have to load this link first: <script src="https://maps.googleapis.com/maps/api/js?v=3.exp&libraries=places"></script> Basically, if I put this line before their code, and put their code in script tags, things work out pretty perfectly, but when I use javascript_include_tag instead of a script tag in the HTML and copy my code to the file I pointed at - like this - <script src="https://maps.googleapis.com/maps/api/js?v=3.exp&libraries=places"></script> <%=javascript_include_tag "map_new_marker_drag"%> , the asset-pipeline seems to load that file before loading the Google Maps API link, and thus I get the error: Uncaught ReferenceError: google is undefined. I tried putting the link in a javascript_include_tag too - like this - <%=javascript_include_tag "https://maps.googleapis.com/maps/api/js?v=3.exp&libraries=places" %> <%=javascript_include_tag "map_new_marker_drag"%> , and it generated this: <script src="https://maps.googleapis.com/maps/api/js?v=3.exp&amp;libraries=places"></script> <script src="https://maps.gstatic.com/cat_js/intl/en_us/mapfiles/api-3/17/2/%7Bmain,places%7D.js" type="text/javascript"></script> <script src="/assets/map_new_marker_drag.js?body=1"></script> and the same error: Uncaught ReferenceError: google is undefined. Do I have to put it in another order? What am I missing about the asset-pipeline mechanisms? What should I do to make the link load before the code, so as to benefit from their objects and get rid of the error? PS: I tried using jQuery functions and so on, but I couldn't make it happen. If you still think this is a proper solution, please provide some code I can use. This is the jQuery function I used: jQuery(function($) { // Asynchronously Load the map API var script = document.createElement('script'); script.src = "http://maps.googleapis.com/maps/api/js?sensor=false&callback=initialize"; document.body.appendChild(script); var scriptTwo = document.createElement('script'); scriptTwo.src = "https://maps.googleapis.com/maps/api/js?v=3.exp&libraries=places"; document.body.appendChild(scripTwo); });

    Read the article

  • In what order does the Asset-Pipeline in Ruby on Rails load JavaScript Files? [on hold]

    - by psycatham
    So, when I decided to remove the <script></script> tags and benefit from the asset-pipeline instead, complications took place. I am working with Google Maps' API V3, and to benefit from the functions and objects that their code provides, you have to load this link first: <script src="https://maps.googleapis.com/maps/api/js?v=3.exp&libraries=places"></script> Basically, if I put this line before their code, and put their code in script tags, things work out pretty perfectly, but when I use javascript_include_tag instead of a script tag in the HTML and copy my code to the file I pointed at - like this - <script src="https://maps.googleapis.com/maps/api/js?v=3.exp&libraries=places"></script> <%=javascript_include_tag "map_new_marker_drag"%> , the asset-pipeline seems to load that file before loading the Google Maps API link, and thus I get the error: Uncaught ReferenceError: google is undefined. I tried putting the link in a javascript_include_tag too - like this - <%=javascript_include_tag "https://maps.googleapis.com/maps/api/js?v=3.exp&libraries=places" %> <%=javascript_include_tag "map_new_marker_drag"%> , and it generated this: <script src="https://maps.googleapis.com/maps/api/js?v=3.exp&amp;libraries=places"></script> <script src="https://maps.gstatic.com/cat_js/intl/en_us/mapfiles/api-3/17/2/%7Bmain,places%7D.js" type="text/javascript"></script> <script src="/assets/map_new_marker_drag.js?body=1"></script> and the same error: Uncaught ReferenceError: google is undefined. Do I have to put it in another order? What am I missing about the asset-pipeline mechanisms? What should I do to make the link load before the code, so as to benefit from their objects and get rid of the error? PS: I tried using jQuery functions and so on, but I couldn't make it happen. If you still think this is a proper solution, please provide some code I can use. This is the jQuery function I used: jQuery(function($) { // Asynchronously Load the map API var script = document.createElement('script'); script.src = "http://maps.googleapis.com/maps/api/js?sensor=false&callback=initialize"; document.body.appendChild(script); var scriptTwo = document.createElement('script'); scriptTwo.src = "https://maps.googleapis.com/maps/api/js?v=3.exp&libraries=places"; document.body.appendChild(scripTwo); });

    Read the article

  • How to convert HTML to BBCode in C#

    - by Dmitriy
    Hello! I need to convert HTML text into BBCodes. Where can I find out how to do this? For example, I convert links: regex = new Regex("<a href=\"(.+?)\">(.+?)</a>"); htmlCode = regex.Replace(htmlCode, "[URL]$1[/URL]"); How can I convert all HTML tags into BBCodes (and replace the tags that have no BBCode equivalent, such as the P tag, with nothing)?
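
    For what it's worth, a minimal C# sketch of one way to extend that Regex.Replace approach to a few more tags (the tag list, patterns, and class name are illustrative assumptions; regex-based HTML conversion stays fragile on nested or malformed markup, so an HTML parser is the safer route for anything complex):

    using System;
    using System.Collections.Generic;
    using System.Text.RegularExpressions;

    class HtmlToBbCode
    {
        // Ordered (pattern, replacement) rules in the same spirit as the link example;
        // tags with no BBCode equivalent are replaced with nothing at the end.
        static readonly KeyValuePair<string, string>[] Rules = new[]
        {
            new KeyValuePair<string, string>(@"<a\s+href=""(.+?)"".*?>(.+?)</a>", "[URL=$1]$2[/URL]"),
            new KeyValuePair<string, string>(@"<(b|strong)>(.+?)</\1>", "[B]$2[/B]"),
            new KeyValuePair<string, string>(@"<(i|em)>(.+?)</\1>", "[I]$2[/I]"),
            new KeyValuePair<string, string>(@"<br\s*/?>", "\n"),
            new KeyValuePair<string, string>(@"</?p\b.*?>", ""),  // e.g. the P tag: drop it
            new KeyValuePair<string, string>(@"<.*?>", "")        // strip anything left over
        };

        public static string Convert(string html)
        {
            string result = html;
            foreach (var rule in Rules)
            {
                result = Regex.Replace(result, rule.Key, rule.Value,
                    RegexOptions.IgnoreCase | RegexOptions.Singleline);
            }
            return result;
        }

        static void Main()
        {
            Console.WriteLine(Convert("<p>See <a href=\"http://example.com\">this <b>site</b></a></p>"));
            // prints: See [URL=http://example.com]this [B]site[/B][/URL]
        }
    }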

    Read the article

  • How to use custom IComparer for SortedDictionary?

    - by Magnus Johansson
    I am having difficulty using my custom IComparer for my SortedDictionary. The goal is to put email addresses in a specific format ([email protected]) as the key, and sort by last name. When I do something like this: public class Program { public static void Main(string[] args) { SortedDictionary<string, string> list = new SortedDictionary<string, string>(new SortEmailComparer()); list.Add("[email protected]", "value1"); list.Add("[email protected]", "value2"); foreach (KeyValuePair<string, string> kvp in list) { Console.WriteLine(kvp.Key); } Console.ReadLine(); } } public class SortEmailComparer : IComparer<string> { public int Compare(string x, string y) { Regex regex = new Regex("\\b\\w*@\\b", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant | RegexOptions.IgnorePatternWhitespace | RegexOptions.Compiled ); string xLastname = regex.Match(x).ToString().Trim('@'); string yLastname = regex.Match(y).ToString().Trim('@'); return xLastname.CompareTo(yLastname); } } I get this ArgumentException when adding the second item: "An entry with the same key already exists." I haven't worked with a custom IComparer for a SortedDictionary before, and I fail to see my error. What am I doing wrong?
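
    A likely culprit: SortedDictionary treats two keys as identical whenever the comparer returns 0, so any two addresses whose extracted "last name" compares equal (including cases where the regex fails to match and both results are empty) collide, and the second Add throws that ArgumentException. A hedged sketch of a comparer that keeps the last-name ordering but falls back to comparing the full address as a tie-breaker (the regex here is a simplification, not the asker's exact pattern):

    using System;
    using System.Collections.Generic;
    using System.Text.RegularExpressions;

    public class SortEmailComparer : IComparer<string>
    {
        // Captures the word immediately before the '@', created once per AppDomain.
        private static readonly Regex LastNameRegex =
            new Regex(@"\b\w+@", RegexOptions.IgnoreCase | RegexOptions.Compiled);

        public int Compare(string x, string y)
        {
            string xLastname = LastNameRegex.Match(x).Value.TrimEnd('@');
            string yLastname = LastNameRegex.Match(y).Value.TrimEnd('@');

            int byLastName = string.Compare(xLastname, yLastname, StringComparison.OrdinalIgnoreCase);
            if (byLastName != 0)
            {
                return byLastName;
            }
            // Same (or unmatched) last name: fall back to the whole key so that
            // distinct addresses are never treated as duplicates.
            return string.Compare(x, y, StringComparison.OrdinalIgnoreCase);
        }
    }

    With distinct addresses that happen to share a last name, the tie-breaker keeps both entries instead of throwing; if you genuinely want one entry per last name, keying the dictionary by the extracted last name is the clearer design.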

    Read the article

  • Is there a reason to use library backups if I'm backing up full disks?

    - by Ben Brocka
    In Windows Backup I can back up libraries or whole drives (or specific folders). I want a complete backup of all relevant drives. After selecting the drives, there's still the option to back up libraries: Is the backup going to do anything different if I include libraries as well as drives? Should I just back up the whole drive instead? Space used by the backup shouldn't be an issue, since I know the incremental backup is pretty smart.

    Read the article

  • Java Spam Filter

    - by JackSparrow
    I'm trying to create a spam filter in Java using the Bayesian algorithm. I use a text file that contains email messages and split the tokens using regex, storing these values in a HashMap. My problem is, with regex, the email addresses are split, so instead of: [email protected] regex causes the token to be: john smith example The same holds true for IP addresses, so for example, instead of: 192.55.34.322 regex splits the tokens to be: 192 55 34 322 So does anybody know of a way that I could read the email messages and store their contents as is?
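
    One common fix is to match whole tokens rather than split on punctuation, putting the "bigger" shapes (email address, dotted IP) before plain words in an alternation. Sketched here in C# for consistency with the other examples on this page; the same pattern idea carries over to java.util.regex.Pattern and Matcher. The pattern is an illustrative assumption, not an RFC-compliant email matcher:

    using System;
    using System.Text.RegularExpressions;

    class Tokenizer
    {
        // Alternation is tried left to right, so whole email addresses and IPs
        // survive as single tokens instead of being split on '.' and '@'.
        private static readonly Regex TokenRegex = new Regex(
            @"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}" +   // email address
            @"|\d{1,3}(?:\.\d{1,3}){3}" +                          // IPv4-like number
            @"|[A-Za-z0-9']+",                                     // ordinary word
            RegexOptions.Compiled);

        static void Main()
        {
            string message = "Contact john.smith@example.com from 192.55.34.32 about the offer";
            foreach (Match token in TokenRegex.Matches(message))
            {
                Console.WriteLine(token.Value);
            }
            // Contact / john.smith@example.com / 192.55.34.32 / about / the / offer
        }
    }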

    Read the article

  • Subscription Based Billing

    - by regex
    Hello all, I'm putting together a small start-up company which will be set up with a subscription-based billing model. The bill will go to customers on either a monthly or quarterly basis depending on the end user's preference. My question has two parts: I'm new to online billing and I'm only really aware of PayPal when it comes to third-party bill payment, but this seems more like a checkout system. I'm sure there are better alternatives to PayPal for third-party billing processors (I have tried Googling for them, but I'm having trouble finding exactly what I'm looking for). What options (companies) are available for third-party payment processing, and what kinds of experiences (good or bad) have you had with them? We would like to give our customers the ability to set up recurring payments. I'd rather not store a customer's credit card number in our database, as I imagine there are a plethora of compliance guidelines around this. Is there a third-party solution for recurring payment processing? On a side note, this is not necessarily a code-related question and is more business focused. I wasn't sure if there was a better route for posting this question, so please comment on or modify this if there is another route I should take. Thanks!

    Read the article

  • Finding Common Phrases in SQL Server TEXT Column

    - by regex
    Short Desc: I'm curious to see if I can use SQL Server Analysis Services or some other SQL Server service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset. Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in issue tracking (ticketing) software. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two to three word phrases, so 9 - 20 character sections of the TEXT blob). This will help me better determine if associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow. Closing Note: I'd really rather not build an application to do this as my method will probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification.
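
    The asker explicitly prefers an out-of-the-box tool (full-text search or something like the SSIS Term Extraction transformation would be the usual starting points), but for reference the underlying job is just n-gram counting. A minimal client-side C# sketch, assuming an illustrative Tickets table with a Notes column and a placeholder connection string:

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Linq;
    using System.Text.RegularExpressions;

    class PhraseCounter
    {
        static void Main()
        {
            // Table, column, and connection string are assumptions for illustration.
            const string connectionString = "Server=.;Database=Ticketing;Integrated Security=true";
            var counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT Notes FROM Tickets", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string notes = reader.IsDBNull(0) ? "" : reader.GetString(0);
                        string[] words = Regex.Split(notes.ToLowerInvariant(), @"\W+")
                                              .Where(w => w.Length > 0).ToArray();

                        // Count every 2- and 3-word phrase (n-gram).
                        for (int n = 2; n <= 3; n++)
                        {
                            for (int i = 0; i + n <= words.Length; i++)
                            {
                                string phrase = string.Join(" ", words, i, n);
                                int current;
                                counts.TryGetValue(phrase, out current);
                                counts[phrase] = current + 1;
                            }
                        }
                    }
                }
            }

            // Print the 50 most common phrases.
            foreach (var pair in counts.OrderByDescending(p => p.Value).Take(50))
            {
                Console.WriteLine("{0,6}  {1}", pair.Value, pair.Key);
            }
        }
    }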

    Read the article

  • Dynamically Resizing an Iframe

    - by regex
    Hello all, I can see that this question has been asked several times, but none of the proposed solutions seem to work for the site I am building, so I am reopening the thread. I am attempting to size an iframe based on the height of its content. Both the page that contains the iframe and its source page exist on the same domain. I have tried the proposed solutions in each of the following threads: Resize iframe height according to content height in it Resizing an iframe based on content I believe that the solutions above are not working because when the reference to body.clientHeight is made, the browser has not actually determined the height of the document. Here is the code I am using: var ifmBlue = document.getElementById("ifmBlue"); ifmBlue.onload = resizeIframe; function resizeIframe() { var ifmBlue = document.getElementById("ifmBluePill"); var ifmDiv = ifmBlue.contentDocument.getElementById("main"); var height = ifmDiv.clientHeight; ifmBlue.style.height = (ifmBlue.contentDocument.body.scrollHeight || ifmBlue.contentDocument.body.offsetHeight || ifmBlue.contentDocument.body.parentNode.clientHeight || height || 500) + 5 + 'px'; } If I debug the script using Firebug, the client height of the iframe.contentDocument's main div is 0. Additionally, body.offsetHeight and body.scrollHeight are 0. However, after the script is finished running, if I inspect the DOM of the HTML iframe element (using Firebug) I can see that the body's clientHeight is 456 and the inner div's clientHeight is 742. This leads me to believe that these values are not yet set when iframe.onload is fired. So, per one of the threads above, I moved the code into the body.onload event handler of the iframe's source page. This solution also did not work. Any help you can provide is much appreciated. Thanks, CJ

    Read the article

  • System.OutOfMemoryException was thrown. at Go60505(RegexRunner ) at System.Text.RegularExpressions.C

    - by Deanvr
    Hi all, I am a C# dev working on some code for a website in VB.NET. We use a lot of caching on a 32-bit IIS 6 Win 2003 box and in some cases run into OutOfMemoryException exceptions. This is the code I trace it back to, and I would like to know if anyone else has had this... Public Sub CreateQueryStringNodes() 'Check for nonstandard characters' Dim key As String Dim keyReplaceSpaces As String Dim r As New Regex("^[-a-zA-Z0-9_]+$", RegexOptions.Compiled) For Each key In HttpContext.Current.Request.Form If Not IsNothing(key) Then keyReplaceSpaces = key.Replace(" ", "_") If r.IsMatch(keyReplaceSpaces) Then CreateNode(keyReplaceSpaces, HttpContext.Current.Request(key)) End If End If Next For Each key In HttpContext.Current.Request.QueryString If Not IsNothing(key) Then keyReplaceSpaces = key.Replace(" ", "_") If r.IsMatch(keyReplaceSpaces) Then CreateNode(keyReplaceSpaces, HttpContext.Current.Request(key).Replace("--", "-")) End If End If Next End Sub .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053 error: Exception of type 'System.OutOfMemoryException' was thrown. at Go60505(RegexRunner ) at System.Text.RegularExpressions.CompiledRegexRunner.Go() at System.Text.RegularExpressions.RegexRunner.Scan(Regex regex, String text, Int32 textbeg, Int32 textend, Int32 textstart, Int32 prevlen, Boolean quick) at System.Text.RegularExpressions.Regex.Run(Boolean quick, Int32 prevlen, String input, Int32 beginning, Int32 length, Int32 startat) at System.Text.RegularExpressions.Regex.IsMatch(String input) at Xcite.Core.XML.Write.CreateQueryStringNodes() at Xcite.Core.XML.Write..ctor(String IncludeSessionAndPostedData) at mysite._Default.Page_Load(Object sender, EventArgs e) at System.Web.UI.Control.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at thanks
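
    One plausible contributor, sketched below in C# since the poster is a C# dev (the same change applies to the VB class): the method constructs a new Regex with RegexOptions.Compiled on every call. On the .NET Framework the IL emitted for a compiled regex is not reclaimed until the AppDomain unloads, so re-creating it per request in a busy 32-bit worker process steadily eats address space. Hoisting the pattern into a single static field compiles it once; class and method names below are placeholders for the real ones:

    using System.Text.RegularExpressions;
    using System.Web;

    public class QueryStringNodes
    {
        // Compile the pattern once per AppDomain instead of once per request.
        private static readonly Regex KeyRegex =
            new Regex("^[-a-zA-Z0-9_]+$", RegexOptions.Compiled);

        public void CreateQueryStringNodes()
        {
            HttpRequest request = HttpContext.Current.Request;

            foreach (string key in request.Form)
            {
                AddNodeIfValid(key, request[key]);
            }
            foreach (string key in request.QueryString)
            {
                string value = request[key];
                AddNodeIfValid(key, value == null ? null : value.Replace("--", "-"));
            }
        }

        private void AddNodeIfValid(string key, string value)
        {
            if (key == null)
            {
                return;
            }
            string safeKey = key.Replace(" ", "_");
            if (KeyRegex.IsMatch(safeKey))
            {
                CreateNode(safeKey, value);
            }
        }

        // Placeholder for the poster's existing CreateNode implementation.
        private void CreateNode(string key, string value)
        {
            // ... build the XML node as in the original class ...
        }
    }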

    Read the article

  • ctags doesn't understand -e option (no exuberant tags option)

    - by Thr4wn
    When I type ctags -e it returns an error saying it doesn't know that command-line option. I thought it should support Exuberant Ctags options because etags works on the CLI. Also, I receive the following error: ctags: unrecognized option --langdef=arc and I have the following in my ~/.ctags file: --langdef=arc --langmap=arc:.arc --regex-arc=/^\(def ([a-zA-Z1-9_*\/<>-]+)/\1/ --regex-arc=/^\(= ([a-zA-Z1-9_*\/<>-]+)/\1/ --regex-scheme=/^\(xdef ([a-zA-Z1-9_*\/<>-]+)/\1/

    Read the article

  • Finding Common Phrases in MS SQL TEXT Column

    - by regex
    Hello all, Short Desc: I'm curious to see if I can use SQL Server Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset. Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in issue tracking (ticketing) software. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two to three word phrases, so 9 - 20 character sections of the TEXT blob). This will help me better determine if associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow. Closing Note: I'd really rather not build an application to do this as my method will probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.

    Read the article

  • Install i486 .package onto x64 CentOS

    - by medoix
    I am trying to install a ".package" file with Autopackage onto my x64 CentOS server and I receive the statement below. -sh-3.2$ bash armagetronad-dedicated-0.2.8.3.1.i486-generic-linux-gnu.package Sorry, Autopackage only supports x86 32-bit systems, or 64-bit systems with compatibility libraries installed. Please install the compatibility libraries and rerun install. However I cannot find any documentation on what 32-bit libraries are required or even where to start... Any ideas or suggestions would be greatly appreciated.

    Read the article

  • Form is trying to save the login value of the submit button to my DB.

    - by Sergio Tapia
    Here's my Zend code: <?php require_once ('Zend\Form.php'); class Sergio_Form_registrationform extends Zend_Form { public function init(){ /*********************USERNAME**********************/ $username = new Zend_Form_Element_Text('username'); $alnumValidator = new Zend_Validate_Alnum(); $username ->setRequired(true) ->setLabel('Username:') ->addFilter('StringToLower') ->addValidator('alnum') ->addValidator('regex', false, array('/^[a-z]+/')) ->addValidator('stringLength',false,array(6,20)); $this->addElement($username); /*********************EMAIL**********************/ $email = new Zend_Form_Element_Text('email'); $alnumValidator = new Zend_Validate_Alnum(); $email ->setRequired(true) ->setLabel('EMail:') ->addFilter('StringToLower') ->addValidator('alnum') ->addValidator('regex', false, array('/^[a-z]+/')) ->addValidator('stringLength',false,array(6,20)); $this->addElement($email); /*********************PASSWORD**********************/ $password = new Zend_Form_Element_Password('password'); $alnumValidator = new Zend_Validate_Alnum(); $password ->setRequired(true) ->setLabel('Password:') ->addFilter('StringToLower') ->addValidator('alnum') ->addValidator('regex', false, array('/^[a-z]+/')) ->addValidator('stringLength',false,array(6,20)); $this->addElement($password); /*********************NAME**********************/ $name = new Zend_Form_Element_Text('name'); $alnumValidator = new Zend_Validate_Alnum(); $name ->setRequired(true) ->setLabel('Name:') ->addFilter('StringToLower') ->addValidator('alnum') ->addValidator('regex', false, array('/^[a-z]+/')) ->addValidator('stringLength',false,array(6,20)); $this->addElement($name); /*********************LASTNAME**********************/ $lastname = new Zend_Form_Element_Text('lastname'); $alnumValidator = new Zend_Validate_Alnum(); $lastname ->setRequired(true) ->setLabel('Last Name:') ->addFilter('StringToLower') ->addValidator('alnum') ->addValidator('regex', false, array('/^[a-z]+/')) ->addValidator('stringLength',false,array(6,20)); $this->addElement($lastname); /*********************DATEOFBIRTH**********************/ $dateofbirth = new Zend_Form_Element_Text('dateofbirth'); $alnumValidator = new Zend_Validate_Alnum(); $dateofbirth->setRequired(true) ->setLabel('Date of Birth:') ->addFilter('StringToLower') ->addValidator('alnum') ->addValidator('regex', false, array('/^[a-z]+/')) ->addValidator('stringLength',false,array(6,20)); $this->addElement($dateofbirth); /*********************AVATAR**********************/ $avatar = new Zend_Form_Element_File('avatar'); $alnumValidator = new Zend_Validate_Alnum(); $avatar ->setRequired(true) ->setLabel('Please select a display picture:'); $this->addElement($avatar); /*********************SUBMIT**********************/ $this->addElement('submit', 'login', array('label' => 'Login')); } } ?> Here's the code I use to save the values: public function saveforminformationAction(){ $form = new Sergio_Form_registrationform(); $request = $this->getRequest(); //if($request->isPost() && $form->isValid($_POST)){ $data = $form->getValues(); $db = $this->_getParam('db'); $db->insert('user',$data); //} } When trying to save the values, I receive a ghastly error: Column 'login' not found.

    Read the article

  • How can I match a twitter username with angular ui router

    - by user3929999
    I need to be able to match a path like '/@someusername' with angular ui router but can't figure out the regex for it. What I have are routes like the following $stateProvider .state('home', {url:'/', templateUrl:'/template/path.html'}) .state('author', {url:'/{username:[regex-to-match-@username-here]}'}) .state('info', {url:'/:slug', templateUrl:'/template/path.html'}) .state('entry', {url:'/:type/:slug', templateUrl:'/template/path.html'}); I need a bit of regex for the 'author' route that will match @usernames. Currently, everything I try is caught by the 'entry' route.

    Read the article

  • Finding Common Byte Sequences in MS SQL TEXT Column

    - by regex
    Hello all, Short Desc: I'm curious to see if I can use SQL Server Analysis Services or some other MS SQL service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset. Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in issue tracking (ticketing) software. I would like to use something out of the box (without having to build something) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two to three word phrases, so 9 - 20 character sections of the TEXT blob). This will help me better determine if associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow. Closing Note: I'd really rather not build an application to do this as my method will probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.

    Read the article

  • Setting up JDK 7 for IntelliJ on the Mac

    - by Fergal
    I installed the JDK by downloading the dmg from the Oracle website here: http://www.oracle.com/technetwork/java/javase/downloads/jdk7u9-downloads-1859576.html After installation I tried to set up the JDK in IntelliJ, but when I set the location to the JDK in the Project Structure > SDKs screen, only a few libraries were loaded and many (including all libraries from Content/Classes/) were missing. How can I add all of the necessary libraries? The install location for the JDK is /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home I've tried looking in /System/Library/Frameworks/JavaVM.framework/Versions/ to no avail.

    Read the article

  • How can I install VLC on RHEL 6.3?

    - by holddame
    I'm having a problem installing VLC on Red Hat 6.3. When I try to use yum install vlc all goes well until it shows me this at the end: Error: Package: vlc-2.0.3-6.el6.x86_64 (linuxtech-release) Requires: libminizip.so.1()(64bit) Error: Package: liblrdf-0.5.0-2.el6.x86_64 (linuxtech-release) Requires: ladspa Error: Package: libffado-2.1.0-0.8.20120325.svn2088.el6.x86_64 (linuxtech-release) Requires: libconfig++.so.8()(64bit) I also can't use yum update. I'm running on a 32-bit processor and I don't know what's wrong. OK, I've installed live555 and tried again; nothing really happened. Here is my yum whatprovides *BasicUsageEnviroment output: live555-devel-0-0.34.2012.01.25.el6.x86_64 : Development files for live555.com streaming : libraries Repo : linuxtech-release Matched from: Filename : /usr/include/BasicUsageEnvironment live555-devel-0-0.34.2012.01.25.el6.i686 : Development files for live555.com streaming : libraries Repo : linuxtech-release Matched from: Filename : /usr/include/BasicUsageEnvironment live555-devel-0-0.27.2010.04.09.el6.rf.x86_64 : Development files for live555.com streaming : libraries Repo : rpmforge Matched from: Filename : /usr/include/BasicUsageEnvironment live555-devel-0-0.27.2012.02.04.el6.rf.x86_64 : Development files for live555.com streaming : libraries Repo : rpmforge Matched from: Filename : /usr/include/BasicUsageEnvironment

    Read the article

  • How to Create a Queue

    - by regex
    Hello all, I have an application built that hits a third-party company's web service in order to create an email account after a customer clicks a button. However, sometimes the web service takes longer than 1 minute to respond, which is way too long for my customers to be sitting there waiting for a response. I need to devise a way to set up some sort of queuing service external to the web site. This way I can add the web service action to the queue and advise the customer it may take up to 2 minutes to create the account. I'm curious about the best way to achieve this. My initial thought is to record the actions in a database table which will be checked on a regular basis by a console app run via Windows Scheduled Tasks. Any issues with that method? Is there a better method you can think of?
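
    The database-table approach described here is workable; below is a hedged C# sketch of the scheduled console worker (the table name, columns, connection string, and provisioning call are all assumptions for illustration). A proper message queue, such as MSMQ or a hosted queuing service, is the more robust alternative if retries, ordering, or multiple workers become important:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class EmailAccountQueueWorker
    {
        // Assumed schema:
        //   CREATE TABLE EmailAccountQueue (
        //     Id INT IDENTITY PRIMARY KEY,
        //     CustomerEmail NVARCHAR(256) NOT NULL,
        //     Status TINYINT NOT NULL DEFAULT 0)   -- 0 = pending, 1 = done, 2 = failed
        const string ConnectionString = "Server=.;Database=Accounts;Integrated Security=true";

        static void Main()
        {
            using (var connection = new SqlConnection(ConnectionString))
            {
                connection.Open();

                var select = new SqlCommand(
                    "SELECT Id, CustomerEmail FROM EmailAccountQueue WHERE Status = 0", connection);
                var pending = new DataTable();
                new SqlDataAdapter(select).Fill(pending);

                foreach (DataRow row in pending.Rows)
                {
                    int id = (int)row["Id"];
                    string email = (string)row["CustomerEmail"];

                    bool succeeded = TryCreateAccount(email);   // the slow third-party call

                    var update = new SqlCommand(
                        "UPDATE EmailAccountQueue SET Status = @status WHERE Id = @id", connection);
                    update.Parameters.AddWithValue("@status", succeeded ? 1 : 2);
                    update.Parameters.AddWithValue("@id", id);
                    update.ExecuteNonQuery();
                }
            }
        }

        static bool TryCreateAccount(string email)
        {
            // Placeholder for the third-party provisioning web service call.
            return true;
        }
    }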

    Read the article

  • Why isn't my File.Move() working?

    - by yeahumok
    I'm not sure what exactly I'm doing wrong here... but I noticed that my File.Move() isn't renaming any files. Also, does anybody know how, in my 2nd loop, I'd be able to populate my .txt file with a list of the path AND sanitized file name? using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Text.RegularExpressions; namespace ConsoleApplication2 { class Program { static void Main(string[] args) { //recurse through files. Let user press 'ok' to move onto next step string[] files = Directory.GetFiles(@"C:\Documents and Settings\jane.doe\Desktop\~Test Folder for [SharePoint] %testing", "*.*", SearchOption.AllDirectories); foreach (string file in files) { Console.Write(file + "\r\n"); } Console.WriteLine("Press any key to continue..."); Console.ReadKey(true); //End section //Regex -- find invalid chars string pattern = " *[\\~#%&*{}/<>?|\"-]+ *"; string replacement = " "; Regex regEx = new Regex(pattern); string[] fileDrive = Directory.GetFiles(@"C:\Documents and Settings\jane.doe\Desktop\~Test Folder for [SharePoint] %testing", "*.*", SearchOption.AllDirectories); List<string> filePath = new List<string>(); //clean out file -- remove the path name so file name only shows string result; foreach(string fileNames in fileDrive) { result = Path.GetFileName(fileNames); filePath.Add(result); } StreamWriter sw = new StreamWriter(@"C:\Documents and Settings\jane.doe\Desktop\~Test Folder for [SharePoint] %testing\File_Renames.txt"); //Sanitize and remove invalid chars foreach(string Files2 in filePath) { try { string sanitized = regEx.Replace(Files2, replacement); sw.Write(sanitized + "\r\n"); System.IO.File.Move(Files2, sanitized); System.IO.File.Delete(Files2); } catch (Exception ex) { Console.Write(ex); } } sw.Close(); } } } I'm VERY new to C# and trying to write an app that recurses through a specific drive, finds invalid characters (as specified in the RegEx pattern), removes them from the filename, and then writes a .txt file that has the path name and the corrected filename. Any ideas?
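
    For reference, the most likely problem in the code above is that the second loop passes bare file names (the path was stripped earlier) to File.Move, so it looks for them in the process's current working directory, fails, and the exception is silently swallowed by the catch block; the File.Delete after a Move is also unnecessary, since Move already removes the source. A hedged sketch that keeps the full paths, renames in place, and logs each path together with its sanitized name (it does not handle name collisions after sanitizing):

    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class RenameInvalidChars
    {
        static void Main()
        {
            string root = @"C:\Documents and Settings\jane.doe\Desktop\~Test Folder for [SharePoint] %testing";
            Regex invalid = new Regex(" *[\\~#%&*{}/<>?|\"-]+ *");

            // Enumerate first so the log file created below is not part of the pass.
            string[] files = Directory.GetFiles(root, "*.*", SearchOption.AllDirectories);

            using (StreamWriter log = new StreamWriter(Path.Combine(root, "File_Renames.txt")))
            {
                foreach (string fullPath in files)
                {
                    string directory = Path.GetDirectoryName(fullPath);
                    string sanitizedName = invalid.Replace(Path.GetFileName(fullPath), " ").Trim();
                    string newPath = Path.Combine(directory, sanitizedName);

                    // Log the original path and the corrected name together.
                    log.WriteLine("{0} -> {1}", fullPath, newPath);

                    if (!string.Equals(fullPath, newPath, StringComparison.OrdinalIgnoreCase))
                    {
                        // File.Move both renames and removes the old entry; no Delete needed.
                        File.Move(fullPath, newPath);
                    }
                }
            }
        }
    }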

    Read the article
