Search Results

Search found 5586 results on 224 pages for 'global illumination'.


  • In the Groove: PASS Board Year 1, Q3

    - by Denise McInerney
    It's nine months into my first year on the PASS Board and I feel like I've found my rhythm. I've accomplished one of the goals I set out for the year and have made progress on others. Here's a recap of the last few months.

    Anti-Harassment Policy & Process Completed: In April I began work on a Code of Conduct for the PASS Summit. The Board had several good discussions and various PASS members provided feedback. You can read more about that in this blog post. Since the document was focused on issues of harassment, we renamed it the "Anti-Harassment Policy" and it was approved by the Board in August. The next step was to refine the guidelines and process for enforcement of the AHP. A subcommittee worked on this and presented an update to the Board at the September meeting. You can read more about that in this post, and you can find the process document here.

    Global Growth: Expanding PASS's reach and making the organization relevant to SQL Server communities around the world has been a focus of the Board's work in 2012. We took the Global Growth initiative out to the community for feedback, and everyone on the Board participated, via Twitter chats, Town Hall meetings, feedback forums and in-person discussions. This community participation helped shape and refine our plans. Implementing the vision for Global Growth goes across all portfolios. The Virtual Chapters are well positioned to help the organization move forward in this area. One outcome of the Global Growth discussions with the community is the expansion of two of the VCs from country-specific to language-specific. Thanks to the leadership in Brazil and Mexico for taking the lead here. I look forward to continued success for the Portuguese- and Spanish-language Virtual Chapters. Together with the Global Chinese VC, PASS is off to a good start in making the VCs truly global.

    Virtual Chapters: The VCs continue to grow and expand. Volunteers recently rebooted the Azure and Virtualization VCs, and a new Education VC will be launching soon. Every week VCs offer excellent free training on a variety of topics. It's the dedication of the VC leaders and volunteers that makes all this possible, and I thank them for it.

    Board Meeting: The Board had an in-person meeting in September in San Diego, CA. As usual we covered a number of topics, including governance changes to support Global Growth, the upcoming Summit, 2013 events and the (then) upcoming PASS election.

    Next Up: Much of the last couple of months has been focused on preparing for the PASS Summit in Seattle Nov. 6-9. I'll be there all week; feel free to stop me if you have a question or concern, or just to introduce yourself. Here are some of the places you can find me:

    VC Leaders Meeting: Tuesday at 8:00 am the VC leaders will have a meeting. We'll review some of the year's highlights and talk about plans for the next year.

    Welcome Reception: The VCs will be at the Welcome Reception in the new VC Lounge. Come by, learn more about what the VCs have to offer and meet others who share your interests.

    Exceptional DBA Awards Party: I'm looking forward to seeing PASS Women in Tech VC leader Meredith Ryan receive her award at this event sponsored by Red Gate.

    Session Presentation: I will be presenting a spotlight session entitled "Stop Bad Data in Its OLTP Tracks" on Wednesday at 3:00 p.m.

    Exhibitor Reception: This reception Wednesday evening in the Expo Hall is a great opportunity to learn more about tools and solutions that can help you in your job.

    Women in Tech Luncheon: This year marks the 10th WIT Luncheon at PASS. I'm honored to be on the panel with Stefanie Higgins, Kevin Kline, Kendra Little and Jen Stirrup. This event is on Thursday at 11:30.

    Community Appreciation Party: Thursday evening don't miss this event thanking all of you for everything you do for PASS and the community. This year we will be at the Experience Music Project and it promises to be a fun party.

    Board Q & A: Friday 9:45-11:15 am the members of the Board will be available to answer your questions. If you have a question for us, or want to hear what other members are thinking about, come by room 401 Friday morning.

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful for making packages flexible, so that you can change object properties at run time and thus make a package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run a package you have to pass it the correct Package Configuration. I usually use XML configuration files, and I also force everyone who works with me to make sure that an object used in several packages has the same name in every package where it is used, in order to simplify configuration usage. Connection Managers are a good example of such objects. For example, all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of "global" objects so that we can have one configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific – or "local" – XML configuration file, or we set the value that needs to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx

    Now, how can we improve this even more? I'd like to have a package that, when run, automatically goes "somewhere", searches for global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The "somewhere" is a SQL Server table (a sketch of a possible definition appears at the end of this entry). In this table you'll put the values that you want to be used at runtime by your package. The ConfigurationFilter column specifies to which package a configuration line has to be applied: a package will use a line only if the value in the ConfigurationFilter column is equal to its name. In the sample above, only the package named "simple-package" will use line number two. There is one exception: the $$Global value indicates a configuration row that has to be applied to every package. With this simple behavior it's possible to replicate the "global" and "local" configuration approach I described before. The ConfigurationValue column contains the value you want to be applied at runtime, and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value – simply the hash of ConfigurationFilter plus PackagePath – so that it can be used as a primary key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS to put DTS Configurations into SQL Server tables (SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx).

    Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration"

    The /AC option expects a string with the format <database_server>;<database_name>;<table_name>; only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package. The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, which load the "global" and "local" configurations. Now, you may wonder why all this machinery is needed instead of simply always passing two XML DTS Configuration files to a package (one for the "local" and one for the "global" configuration), like this:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig"

    The problem is that this approach doesn't work if one of the two configuration files contains a value that has to be applied to an object that doesn't exist in the loaded package. That situation raises an error that halts package execution. To work around it you could create a configuration file for each package; unfortunately this makes deployment and management harder, since you'd have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration for the pre-production and production environments has never been so easy! To use the Auto-Configuration option, download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, is very welcome!
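    A hedged sketch of the configuration table described above (the original screenshot is not reproduced here; column names follow the text, data types are assumptions):

        using System.Data.SqlClient;

        class AutoConfigTableSetup
        {
            // Table and column names follow the post; data types are assumptions.
            // Assumes the ssiscfg schema already exists in ssis_auto_configuration.
            const string CreateTableSql = @"
                CREATE TABLE ssiscfg.configuration (
                    ConfigurationFilter NVARCHAR(255) NOT NULL, -- package name, or $$Global for all packages
                    ConfigurationValue  NVARCHAR(255) NULL,     -- the value applied at runtime
                    PackagePath         NVARCHAR(255) NOT NULL, -- the object property the value is applied to
                    ConfiguredValueType NVARCHAR(20)  NOT NULL, -- data type of the value
                    Checksum            NVARCHAR(64)  NOT NULL PRIMARY KEY -- hash of ConfigurationFilter + PackagePath
                );";

            static void Main()
            {
                // Windows Authentication, matching the /AC option's requirement.
                using (var conn = new SqlConnection(
                    "Server=localhost;Database=ssis_auto_configuration;Integrated Security=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand(CreateTableSql, conn))
                        cmd.ExecuteNonQuery();
                }
            }
        }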

    Read the article

  • PHP: getting a warning since upgrading to 5.2.17

    - by Jules
    I can't reproduce this on my test server and have no idea why it is happening; other queries / functions work. I'm getting this warning:

        PHP Warning: mysql_connect() [function.mysql-connect]: Can't connect to MySQL server on '--my isps server--' (10060) in D:\domains\mydomain.com\wwwroot\php\_stdfuncs.php on line 191

    This function and others like it are having problems (but some are OK). This is my include file:

        function AddPageError($PageHandle, $Requested)
        {
            global $server;
            global $db;
            global $user;
            global $pass;
            global $sDebug;
            $con = mysql_connect($server, $user, $pass);

    I have an include file which sets those variables; as I say, they work on other pages and functions. No idea why??

    Read the article

  • How do I detect if there is already a similar document stored in a Lucene index?

    - by Jenea
    Hi. I need to exclude duplicates in my database. The problem is that duplicates are not exact matches but rather similar documents. For this purpose I decided to use FuzzyQuery as follows:

        var fuzzyQuery = new global::Lucene.Net.Search.FuzzyQuery(
            new Term("text", queryText), 0.8f, 0);
        hits = _searcher.Search(query);

    The idea was to set the minimal similarity to 0.8 (which I think is high enough) so only similar documents will be found, excluding those that are not sufficiently similar. To test this code I decided to see if it finds an already existing document. The variable queryText was assigned a value that is stored in the index. The code above found nothing; in other words, it doesn't detect even an exact match. The index was built by this code:

        doc.Add(new global::Lucene.Net.Documents.Field(
            "text",
            text,
            global::Lucene.Net.Documents.Field.Store.YES,
            global::Lucene.Net.Documents.Field.Index.TOKENIZED,
            global::Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS));

    I followed the recommendations from below and the results are: TermQuery doesn't return any result. A query constructed with

        var _analyzer = new RussianAnalyzer();
        var parser = new global::Lucene.Net.QueryParsers.QueryParser("text", _analyzer);
        var query = parser.Parse(queryText);
        var _searcher = new IndexSearcher(Settings.General.Default.LuceneIndexDirectoryPath);
        var hits = _searcher.Search(query);

    returns several results: the document with the exact match has the maximum score, and several other documents have similar content.
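    One thing worth noting: in the first snippet the FuzzyQuery is built but the search is issued against a different variable, query. A hedged sketch of what was presumably intended (searching with the fuzzy query itself; names mirror the question's code):

        // Hedged sketch: issue the search with the FuzzyQuery that was just built,
        // rather than an unrelated 'query' variable.
        Lucene.Net.Search.Hits FindSimilar(Lucene.Net.Search.IndexSearcher searcher, string queryText)
        {
            var fuzzyQuery = new Lucene.Net.Search.FuzzyQuery(
                new Lucene.Net.Index.Term("text", queryText),
                0.8f,  // minimum similarity
                0);    // length of the common non-fuzzy prefix
            return searcher.Search(fuzzyQuery);
        }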

    Read the article

  • LINQ Query using UDF that receives parameters from the query

    - by Ben Fidge
    I need help using a UDF in a LINQ query which calculates a user's distance from a fixed point.

        int pointX = 567, pointY = 534; // random points on a square grid
        var q = from n in _context.Users
                join m in _context.GetUserDistance(n.posY, n.posY, pointX, pointY, n.UserId)
                    on n.UserId equals m.UserId
                select new User()
                {
                    PosX = n.PosX,
                    PosY = n.PosY,
                    Distance = m.Distance,
                    Name = n.Name,
                    UserId = n.UserId
                };

    GetUserDistance is just a UDF (a table-valued function) that returns a single row with that user's distance from the point designated by the pointX and pointY variables, and the designer generates the following for it:

        [global::System.Data.Linq.Mapping.FunctionAttribute(Name="dbo.GetUserDistance", IsComposable=true)]
        public IQueryable<GetUserDistanceResult> GetUserDistance(
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="X1", DbType="Int")] System.Nullable<int> x1,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="X2", DbType="Int")] System.Nullable<int> x2,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="Y1", DbType="Int")] System.Nullable<int> y1,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="Y2", DbType="Int")] System.Nullable<int> y2,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="UserId", DbType="Int")] System.Nullable<int> userId)
        {
            return this.CreateMethodCallQuery<GetUserDistanceResult>(
                this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), x1, x2, y1, y2, userId);
        }

    When I try to compile I get: The name 'n' does not exist in the current context
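    For what it's worth, a common workaround (a hedged sketch, not verified against this exact model): a join's inner sequence cannot reference the outer range variable n, but a second from clause can, and LINQ to SQL translates it as CROSS APPLY:

        // Hedged sketch: use a second 'from' clause instead of 'join' so the
        // UDF call may reference n. LINQ to SQL renders this as CROSS APPLY.
        var q = from n in _context.Users
                from m in _context.GetUserDistance(n.PosX, n.PosY, pointX, pointY, n.UserId)
                select new User()
                {
                    PosX = n.PosX,
                    PosY = n.PosY,
                    Distance = m.Distance,
                    Name = n.Name,
                    UserId = n.UserId
                };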

    Read the article

  • Why do my raytraced spheres have dark lines when lit with multiple light sources?

    - by Curyous
    I have a simple raytracer that only works back up to the first intersection. The scene looks OK with two different light sources, but when both lights are in the scene, there are dark shadows where the lit area from one ends, even in the middle of a lit area from the other light source (particularly noticeable on the green ball). The transition from the 'area lit by both light sources' to the 'area lit by just one light source' seems to be slightly darker than the 'area lit by just one light source'. The code where I'm adding the lighting effects is:

        // trace lights
        for (int l = 0; l < primitives.count; l++) {
            Primitive *p = [primitives objectAtIndex:l];
            if (p.light) {
                Sphere *lightSource = (Sphere *)p;
                // calculate diffuse shading
                Vector3 *light = [[Vector3 alloc] init];
                light.x = lightSource.centre.x - intersectionPoint.x;
                light.y = lightSource.centre.y - intersectionPoint.y;
                light.z = lightSource.centre.z - intersectionPoint.z;
                [light normalize];
                Vector3 *normal = [[primitiveThatWasHit getNormalAt:intersectionPoint] retain];
                if (primitiveThatWasHit.material.diffuse > 0) {
                    float illumination = DOT(normal, light);
                    if (illumination > 0) {
                        float diff = illumination * primitiveThatWasHit.material.diffuse;
                        // add diffuse component to ray color
                        colour.red += diff * primitiveThatWasHit.material.colour.red * lightSource.material.colour.red;
                        colour.blue += diff * primitiveThatWasHit.material.colour.blue * lightSource.material.colour.blue;
                        colour.green += diff * primitiveThatWasHit.material.colour.green * lightSource.material.colour.green;
                    }
                }
                [normal release];
                [light release];
            }
        }

    How can I make it look right?

    Read the article

  • master pages, form pages, form runat=server > all onclick methods on master page?

    - by b0x0rz
    The problem I have is that I have multiple nested master pages:

    level 1: global (header, footer, login, navigation, etc...)
    level 2: specific (search pages, account pages, etc...)
    level 3: the page itself

    Now, since only one form can have runat=server, I put the form on the global (level 1) master page, so I can handle things like login, feedback, etc. With this solution I'd also have to put the level 3 methods (see above), such as search, on the level 1 master page, but this leads to that page becoming heavy (for development) with code from all over, even code that is used on a single page only (a change-email form, for example). Is there any way to delegate such methods (for example ChangeEMail) from level 1 (the global master page) to level 3 (the single page itself)? To be even more clear: I want to NOT have to have the ChangeEMail method in the global master page's code-behind, but would like to 'MOVE' it somehow to the only page that actually uses it. The reason it currently has to be on the global master is that the global master has the form runat=server, and there can be only one of those per aspx page. This way it will be easier (more logical) to structure the code. thnx (hope I explained it correctly) I have searched but did not find any general info on handling this case; usually the answer is 'have all the methods on the master page', but I don't like that, so ANY way of moving it to the specific page would be awesome. thnx
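    A hedged aside, not from the question: in ASP.NET Web Forms, controls declared inside a content page's ContentPlaceHolder still raise their events in that page's own code-behind, even though the single form runat=server lives on the top-level master. A minimal sketch with invented names:

        // Hedged sketch: code-behind of a level-3 content page. The Button and
        // TextBox are declared in this page's markup (inside a ContentPlaceHolder);
        // the Click event fires here, not on the master page, so ChangeEMail can
        // live on the only page that uses it.
        using System;
        using System.Web.UI.WebControls;

        public partial class ChangeEmailPage : System.Web.UI.Page
        {
            protected TextBox txtNewEmail;   // normally declared by the designer file
            protected Button btnChangeEmail; // markup: OnClick="btnChangeEmail_Click"

            protected void btnChangeEmail_Click(object sender, EventArgs e)
            {
                ChangeEMail(txtNewEmail.Text);
            }

            private void ChangeEMail(string newEmail)
            {
                // ... update the user's e-mail here ...
            }
        }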

    Read the article

  • What is the best way to handle connections to MySQL from C#?

    - by srk
    I am working on a C# application which connects to a MySQL server. There are about 20 functions which connect to the database, and the application will be deployed to over 200 machines. I am using the code below, identical for all the functions, to connect to the database. The problem is that I can see some connections are not closed and are still alive once deployed to those 200-plus machines.

    Connection string:

        <add key="Con_Admin" value="server=test-dbserver; database=test_admindb; uid=admin; password=1Password; Use Procedure Bodies=false;" />

    Global declaration of the connection in the application [Global.cs]:

        public static MySqlConnection myConn_Instructor =
            new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);

    Function to query the database:

        public static DataSet CheckLogin_Instructor(string UserName, string Password)
        {
            DataSet dsValue = new DataSet();
            //MySqlConnection myConn = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);
            try
            {
                string Query = "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password`," +
                               " FROM accounts " +
                               " WHERE accounts.str_nric = '" + UserName + "' AND accounts.str_password = '" + Password + "\'";
                MySqlCommand cmd = new MySqlCommand(Query, Global.myConn_Instructor);
                MySqlDataAdapter da = new MySqlDataAdapter();
                if (Global.myConn_Instructor.State == ConnectionState.Closed)
                {
                    Global.myConn_Instructor.Open();
                }
                cmd.ExecuteScalar();
                da.SelectCommand = cmd;
                da.Fill(dsValue);
                Global.myConn_Instructor.Close();
            }
            catch (Exception ex)
            {
                Global.myConn_Instructor.Close();
                ExceptionHandler.writeToLogFile(System.Environment.NewLine + "Target : " + ex.TargetSite.ToString() +
                    System.Environment.NewLine + "Message : " + ex.Message.ToString() +
                    System.Environment.NewLine + "Stack : " + ex.StackTrace.ToString());
            }
            return dsValue;
        }
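    A hedged aside, not part of the question: a common way to avoid leaked connections is to open a short-lived connection per call inside using blocks (letting ADO.NET's connection pool handle reuse) instead of sharing one static connection. A minimal sketch, with parameters instead of string concatenation:

        // Hedged sketch: per-call connection with 'using' so Dispose always
        // closes it, even on exceptions. Requires MySql.Data.MySqlClient.
        public static DataSet CheckLogin_Instructor(string userName, string password)
        {
            var dsValue = new DataSet();
            using (var conn = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]))
            using (var cmd = new MySqlCommand(
                "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password` " +
                "FROM accounts WHERE accounts.str_nric = @nric AND accounts.str_password = @pwd", conn))
            {
                cmd.Parameters.AddWithValue("@nric", userName);
                cmd.Parameters.AddWithValue("@pwd", password);
                using (var da = new MySqlDataAdapter(cmd))
                {
                    da.Fill(dsValue); // Fill opens and closes the connection itself
                }
            }
            return dsValue;
        }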

    Read the article

  • Remote interface lookup problem in GlassFish 3

    - by andersmo
    I have deployed a war file, with action classes and a facade, and a jar file with EJB components (a stateless bean, a couple of entities and a persistence.xml) on GlassFish 3. My problem is that I can't find my remote interface to the stateless bean from my facade. My bean and interface look like:

        @Remote
        public interface RecordService {...

        @Stateless(name="RecordServiceBean", mappedName="ejb/RecordServiceJNDI")
        public class RecordServiceImpl implements RecordService {
            @PersistenceContext(unitName="record_persistence_ctx")
            private EntityManager em;...

    and if I look in the server.log the portable JNDI names look like:

        Portable JNDI names for EJB RecordServiceBean : [java:global/recordEjb/RecordServiceBean, java:global/recordEjb/RecordServiceBean!domain.service.RecordService]|#]

    and my facade:

        ...InitialContext ctx = new InitialContext();
        try {
            recordService = (RecordService) ctx.lookup("java:global/recordEjb/RecordServiceBean!domain.service.RecordService");
        } catch (Throwable t) {
            System.out.println("ooops");
            try {
                recordService = (RecordService) ctx.lookup("java:global/recordEjb/RecordServiceImpl");
            } catch (Throwable t2) {
                System.out.println("noooo!");
            }...
        }

    and when the facade makes the first call this exception occurs:

        javax.naming.NamingException: Lookup failed for 'java:global/recordEjb/RecordServiceBean!domain.service.RecordService' in SerialContext [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacedomain.service.RecordService [Root exception is java.lang.ClassNotFoundException: domain.service.RecordService]]

    and the second call:

        javax.naming.NamingException: Lookup failed for 'java:global/recordEjb/RecordServiceBean' in SerialContext [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacedomain.service.RecordService [Root exception is java.lang.ClassNotFoundException: domain.service.RecordService]]

    I have also tested injecting the bean with the @EJB annotation:

        @EJB(name="RecordServiceBean")
        private RecordService recordService;

    But that doesn't work either. What have I missed? I tried with an ejb-jar.xml, but that shouldn't be necessary. Can anyone tell me how to fix this problem?

    Read the article

  • Faster quadrature decoder loops with Python code

    - by Kelei
    I'm working with a BeagleBone Black and using Adafruit's IO Python library. I wrote a simple quadrature decoding function and it works perfectly fine when the motor runs at about 1800 RPM. But when the motor runs at higher speeds, the code starts missing some of the interrupts and the encoder counts start to accumulate errors. Do you guys have any suggestions as to how I can make the code more efficient, or whether there are functions which can cycle the interrupts at a higher frequency? Thanks, Kel. Here's the code:

        # Define encoder count function
        def encodercount(term):
            global counts
            global Encoder_A
            global Encoder_A_old
            global Encoder_B
            global Encoder_B_old
            global error

            Encoder_A = GPIO.input('P8_7')  # stores the value of the encoders at time of interrupt
            Encoder_B = GPIO.input('P8_8')

            if Encoder_A == Encoder_A_old and Encoder_B == Encoder_B_old:
                # this will be an error
                error += 1
                print 'Error count is %s' % error
            elif (Encoder_A == 1 and Encoder_B_old == 0) or (Encoder_A == 0 and Encoder_B_old == 1):
                # this will be clockwise rotation
                counts += 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            elif (Encoder_A == 1 and Encoder_B_old == 1) or (Encoder_A == 0 and Encoder_B_old == 0):
                # this will be counter-clockwise rotation
                counts -= 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            else:
                # this will be an error as well
                error += 1
                print 'Error count is %s' % error

            # store the current encoder values as old values
            # to be used as comparison in the next loop
            Encoder_A_old = Encoder_A
            Encoder_B_old = Encoder_B

        # Initialize the interrupts - these trigger on both the rising and falling edges
        GPIO.add_event_detect('P8_7', GPIO.BOTH, callback=encodercount)  # Encoder A
        GPIO.add_event_detect('P8_8', GPIO.BOTH, callback=encodercount)  # Encoder B

        # This part of the code runs normally in the background
        while True:
            time.sleep(1)

    Read the article

  • Static variable definition order in C++

    - by rafeeq
    Hi, I have a class Tools which has a static variable std::vector<std::string> m_tools. Can I insert values into the static variable from the global scope of other classes defined in other files? Example:

    tools.h file:

        class Tools
        {
        public:
            static std::vector<std::string> m_tools;
            void print()
            {
                for (int i = 0; i < m_tools.size(); i++)
                    std::cout << "Tools initialized: " << m_tools[i];
            }
        };

    tools.cpp file:

        std::vector<std::string> Tools::m_tools; // definition

    A Register class whose constructor inserts a new string into the static variable:

        class Register
        {
        public:
            Register(std::string str)
            {
                Tools::m_tools.push_back(str);
            }
        };

    Different files then insert strings into the static variable from global scope:

    first_tool.cpp:

        // global scope: declare a global Register variable
        Register a("first_tool");

    second_tool.cpp:

        // global scope: declare a global Register variable
        Register a("second_tool");

    Main.cpp:

        void main()
        {
            Tools abc;
            abc.print();
        }

    Will this work? In the above example only one string is getting inserted into the static list. The problem looks like "at global scope it tries to insert the element before the definition is done". Please let me know: is there any way to set the static definition priority? Or is there any alternative way of doing the same?

    Read the article

  • WebCenter Customer Spotlight: Hitachi Data Systems

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Watch this Webcast to see a live demo of how HDS creates multilingual content for their 35+ regional websites.

    Solution Summary: Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems and to implement a centralized translation management system. HDS implemented a global web content management system based on Oracle WebCenter Content and integrated the Lingotek translation management system to manage their multilingual content. The implemented solution provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines.

    Company Overview: Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. and part of the Hitachi Information Systems & Telecommunications Division. The company sells through direct and indirect channels in more than 170 countries and regions. Its customers include 50 percent of the Fortune 100 companies. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions.

    Business Challenges: HDS has over 35 global websites, and the lack of global web capabilities led to inconsistent messaging and slower time to market, and failed to address local language needs. There was extensive operational overhead due to manual and redundant processes. Translation efforts were superficial, inconsistent and wasteful, and the lack of translation automation tools discouraged localization. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems and to implement a centralized translation management system.

    Solution Deployed: HDS implemented a global web content management system based on Oracle WebCenter Content. The solution supports decentralized publishing for their 35+ global sites to address local market needs, while ensuring editorial and brand review through embedded review processes. They integrated the Lingotek translation management system into Oracle WebCenter Content to manage their multilingual content.

    Business Results:
    - Provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines
    - Enables end-to-end content lifecycle management across multiple languages
    - Leverages translation memory for reuse and consistency
    - Reduces time to market with a central repository of translated content

    Additional Information: HDS Webcast | Oracle WebCenter Content | Lingotek website

    Read the article

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating-point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations. As this is the first time I'm making a threaded application, I'm not sure if the following method for thread synchronization is correct. First of all, I use two global variables to control the two threads: if a global variable is set to 1, that means the thread can start working; otherwise it must not work. This is checked by each thread running an infinite loop, and if it detects that the global variable has changed its value, it does its job and then sets the variable back to 0 again. The main program also uses an empty while loop to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1. My dilemma is that, while this process works under some conditions, it slows down the program considerably, and also it fails to run properly when compiled for Release in Visual Studio, or when compiled with any sort of -O optimization with gcc (i.e. nothing on screen, even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations, it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of fscanf_s calls, but you can replace those with the usual fscanf, if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it. My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiling for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
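    A hedged illustration of the handshake described above (worker waits for a signal, does a scanline pass, signals completion), written in C# for brevity and using event objects instead of plain global flags and busy-waiting. Plain non-volatile flags can legally be cached by an optimizing compiler, which is one classic reason optimized builds of such code hang or misbehave:

        using System;
        using System.Threading;

        class ScanlinePair
        {
            static readonly AutoResetEvent startEven = new AutoResetEvent(false);
            static readonly AutoResetEvent startOdd  = new AutoResetEvent(false);
            static readonly AutoResetEvent doneEven  = new AutoResetEvent(false);
            static readonly AutoResetEvent doneOdd   = new AutoResetEvent(false);

            static void Worker(AutoResetEvent start, AutoResetEvent done, int firstLine)
            {
                while (true)
                {
                    start.WaitOne(); // sleep until the main thread hands us a triangle
                    // ... rasterize every second scanline, beginning at firstLine ...
                    done.Set();      // tell the main thread this half is finished
                }
            }

            static void Main()
            {
                new Thread(() => Worker(startEven, doneEven, 0)) { IsBackground = true }.Start();
                new Thread(() => Worker(startOdd,  doneOdd,  1)) { IsBackground = true }.Start();

                for (int triangle = 0; triangle < 100; triangle++)
                {
                    // ... fill the per-thread triangle structures here ...
                    startEven.Set();
                    startOdd.Set();
                    doneEven.WaitOne(); // block instead of spinning on a flag
                    doneOdd.WaitOne();
                }
            }
        }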

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent: when you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now.

    Why use HTML5 Local Storage? I use HTML5 local storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (you can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries) and stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server.

    Retrieving Data from the Server: In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps:

    1. Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities.
    2. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities.
    3. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:

        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Web;
        using JavaScriptReference.Models;

        namespace JavaScriptReference.Services
        {
            [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
            public class EntryService : DataService<ReferenceDBEntities>
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config)
                {
                    config.UseVerboseErrors = true;
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }

                // Define a change interceptor for the Entries entity set.
                [ChangeInterceptor("Entries")]
                public void OnChangeEntries(Entry entry, UpdateOperations operations)
                {
                    if (!HttpContext.Current.Request.IsAuthenticated)
                    {
                        throw new DataServiceException("Cannot update reference unless authenticated.");
                    }
                }
            }
        }

    The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceDBEntities>; because of this, the data service exposes the contents of the ReferenceDB database. In the code above, I defined a ChangeInterceptor to prevent unauthenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service:

        $.ajax({
            dataType: "json",
            url: "/Services/EntryService.svc/Entries",
            success: function (result) {
                var data = callback(result["d"]);
            }
        });

    Notice that you must unwrap the data using result["d"]. After you unwrap the data, you have a JavaScript array of the entries. I'm transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment:

        { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]}

    I worried about the amount of time that it would take to transfer the records.
    According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. Five seconds is a small price to pay to avoid performing any server fetches of the data in the future. (The original post includes a Fiddler table of estimated times for different connection types, not reproduced here.) Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time, so I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem.

    Adding Data to HTML5 Local Storage: After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here's the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage:

        <script type="text/javascript">
            window.localStorage.setItem("message", "Hello World!");
        </script>

    You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT-I in Chrome) to view items added to local storage. After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method:

        <script type="text/javascript">
            var message = window.localStorage.getItem("message");
        </script>

    You can only add strings to local storage, not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this:

        function Storage() {
            this.get = function (name) {
                return JSON.parse(window.localStorage.getItem(name));
            };
            this.set = function (name, value) {
                window.localStorage.setItem(name, JSON.stringify(value));
            };
            this.clear = function () {
                window.localStorage.clear();
            };
        }

    If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this:

        var store = new Storage();

        // Add array to storage
        var products = [
            { name: "Fish", price: 2.33 },
            { name: "Bacon", price: 1.33 }
        ];
        store.set("products", products);

        // Retrieve items from storage
        var products = store.get("products");

    Modern browsers support the JSON object natively. If you need the script above to work with older browsers, then you should download the JSON2.js library from https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON.

    Merging Server Changes with Browser Local Storage: When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.
    The OData query to get the latest updates looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L)

    If you remove the URL encoding, the query looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L)

    This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:

        merge: function (oldEntries, newEntries) {
            // concat (this performs the add)
            oldEntries = oldEntries || [];
            var mergedEntries = oldEntries.concat(newEntries);

            // sort
            this.sortByIdThenLastUpdated(mergedEntries);

            // prune duplicates (this performs the update)
            mergedEntries = this.pruneDuplicates(mergedEntries);

            // delete
            mergedEntries = this.removeIsDeleted(mergedEntries);

            // sort
            this.sortByName(mergedEntries);

            return mergedEntries;
        },

    The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method, using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx

    One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item, and quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column; however, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this:

        public override int SaveChanges(SaveOptions options)
        {
            var changes = this.ObjectStateManager.GetObjectStateEntries(
                EntityState.Modified | EntityState.Added | EntityState.Deleted);
            foreach (var change in changes)
            {
                var entity = change.Entity as IEntityTracking;
                if (entity != null)
                {
                    entity.LastUpdated = DateTime.Now.Ticks;
                }
            }
            return base.SaveChanges(options);
        }

    Notice that I assign DateTime.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted.

    Summary: After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data, then I recommend that you take advantage of this new feature included in the HTML5 standard.
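    As an aside, the same delta filter can be expressed server-side in LINQ against the generated model. A hedged sketch, assuming an Entries entity set of Entry rows (as the service code above suggests); the method name is invented:

        // Hedged sketch: server-side equivalent of the OData $filter above.
        // 'lastUpdated' is the tick value the client stored in entriesLastUpdated.
        // Requires 'using System.Linq;'.
        public IQueryable<Entry> GetChangedEntries(ReferenceDBEntities context, long lastUpdated)
        {
            return context.Entries.Where(e => e.LastUpdated > lastUpdated);
        }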

    Read the article

  • A Communication System for XAML Applications

    - by psheriff
    In any application, you want to keep the coupling between any two or more objects as loose as possible. Coupling happens when one class contains a property that is used in another class, or uses another class in one of its methods. If you have this situation, then this is called strong or tight coupling. One popular design pattern to help keep objects loosely coupled is the Mediator design pattern. The basics of this pattern are very simple: avoid having one object directly talk to another object, and instead use another class to mediate between the two. As with most of my blog posts, the purpose is to introduce you to a simple approach to using a message broker, not all of the fine details.

    IPDSAMessageBroker Interface: As with most implementations of a design pattern, you typically start with an interface or an abstract base class. In this particular instance, an interface will work just fine. The interface for our message broker class contains just a single method, SendMessage, and one event, MessageReceived.

        public delegate void MessageReceivedEventHandler(
            object sender, PDSAMessageBrokerEventArgs e);

        public interface IPDSAMessageBroker
        {
            void SendMessage(PDSAMessageBrokerMessage msg);

            event MessageReceivedEventHandler MessageReceived;
        }

    PDSAMessageBrokerMessage Class: As you can see in the interface, the SendMessage method requires a PDSAMessageBrokerMessage to be passed to it. This class simply has a MessageName property of type string and a MessageBody property of type object, so you can pass whatever you want in the body. You might pass a string in the body, or a complete Customer object. The MessageName property helps the receiver of the message know what is in the MessageBody property.

        public class PDSAMessageBrokerMessage
        {
            public PDSAMessageBrokerMessage()
            {
            }

            public PDSAMessageBrokerMessage(string name, object body)
            {
                MessageName = name;
                MessageBody = body;
            }

            public string MessageName { get; set; }

            public object MessageBody { get; set; }
        }

    PDSAMessageBrokerEventArgs Class: As our message broker class will be raising an event that others can respond to, it is a good idea to create your own event argument class. This class inherits from the System.EventArgs class and adds a couple of additional properties: MessageName, a string value, and Message, of type PDSAMessageBrokerMessage.

        public class PDSAMessageBrokerEventArgs : EventArgs
        {
            public PDSAMessageBrokerEventArgs()
            {
            }

            public PDSAMessageBrokerEventArgs(string name,
                PDSAMessageBrokerMessage msg)
            {
                MessageName = name;
                Message = msg;
            }

            public string MessageName { get; set; }

            public PDSAMessageBrokerMessage Message { get; set; }
        }

    PDSAMessageBroker Class: Now that you have an interface class and a class to pass a message through an event, it is time to create your actual PDSAMessageBroker class. This class implements the SendMessage method and also creates the event handler for the delegate declared in your interface.
        public class PDSAMessageBroker : IPDSAMessageBroker
        {
            public void SendMessage(PDSAMessageBrokerMessage msg)
            {
                PDSAMessageBrokerEventArgs args;

                args = new PDSAMessageBrokerEventArgs(
                    msg.MessageName, msg);

                RaiseMessageReceived(args);
            }

            public event MessageReceivedEventHandler MessageReceived;

            protected void RaiseMessageReceived(
                PDSAMessageBrokerEventArgs e)
            {
                if (null != MessageReceived)
                    MessageReceived(this, e);
            }
        }

    The SendMessage method takes a PDSAMessageBrokerMessage object as an argument. It then creates an instance of a PDSAMessageBrokerEventArgs class, passing to the constructor two items: the MessageName from the PDSAMessageBrokerMessage object and also the object itself. It may seem a little redundant to pass in the message name when that same message name is part of the message, but it does make consuming the event and checking for the message name a little cleaner – as you will see in the next section.

    Create a Global Message Broker: In your WPF application, create an instance of this message broker class in the App class located in the App.xaml file. Create a public property in the App class and create a new instance of that class in the OnStartup event procedure, as shown in the following code:

        public partial class App : Application
        {
            public PDSAMessageBroker MessageBroker { get; set; }

            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                MessageBroker = new PDSAMessageBroker();
            }
        }

    Sending and Receiving Messages: Let's assume you have a user control that you load into a control on your main window, and you want to send a message from that user control to the main window. You might have the main window display a message box, or put a string into a status bar as shown in Figure 1. (Figure 1: The main window can receive and send messages – image not reproduced here.) The first thing you do in the main window is hook up an event procedure to the MessageReceived event of the global message broker. This is done in the constructor of the main window:

        public MainWindow()
        {
            InitializeComponent();

            (Application.Current as App).MessageBroker.
                MessageReceived += new MessageReceivedEventHandler(
                    MessageBroker_MessageReceived);
        }

    One piece of code you might not be familiar with is accessing a property defined in the App class of your XAML application. Within the App.xaml file is a class named App that inherits from the Application object. You access the global instance of this App class by using Application.Current, and you cast Application.Current to App prior to accessing any of the public properties or methods you defined in the App class. Thus, the code (Application.Current as App).MessageBroker allows you to get at the MessageBroker property defined in the App class. In the MessageReceived event procedure in the main window (shown below) you can now check whether the MessageName property of the PDSAMessageBrokerEventArgs is equal to "StatusBar", and if it is, display the message body in the status bar text block control.

        void MessageBroker_MessageReceived(object sender,
            PDSAMessageBrokerEventArgs e)
        {
            switch (e.MessageName)
            {
                case "StatusBar":
                    tbStatus.Text = e.Message.MessageBody.ToString();
                    break;
            }
        }

    In the Page 1 user control's Loaded event procedure you send the "StatusBar" message through the global message broker to any listener using the following code:

        private void UserControl_Loaded(object sender,
            RoutedEventArgs e)
        {
            // Send Status Message
            (Application.Current as App).MessageBroker.
                SendMessage(new PDSAMessageBrokerMessage("StatusBar",
                    "This is Page 1"));
        }

    Since the main window is listening for the message 'StatusBar', it will display the value "This is Page 1" in the status bar at the bottom of the main window.

    Sending a Message to a User Control: The previous example sent a message from the user control to the main window. You can also send messages from the main window to any listener. Remember that the global message broker is really just a broadcaster to anyone who has hooked into the MessageReceived event. In the constructor of the user control named ucPage1 you can hook into the global message broker's MessageReceived event. You can then listen for any messages sent to this control by using a switch-case structure similar to the one in the main window.

        public ucPage1()
        {
            InitializeComponent();

            // Hook to the Global Message Broker
            (Application.Current as App).MessageBroker.
                MessageReceived += new MessageReceivedEventHandler(
                    MessageBroker_MessageReceived);
        }

        void MessageBroker_MessageReceived(object sender,
            PDSAMessageBrokerEventArgs e)
        {
            // Look for messages intended for Page 1
            switch (e.MessageName)
            {
                case "ForPage1":
                    MessageBox.Show(e.Message.MessageBody.ToString());
                    break;
            }
        }

    Once the ucPage1 user control has been loaded into the main window you can then send a message using the following code:

        private void btnSendToPage1_Click(object sender,
            RoutedEventArgs e)
        {
            PDSAMessageBrokerMessage arg =
                new PDSAMessageBrokerMessage();

            arg.MessageName = "ForPage1";
            arg.MessageBody = "Message For Page 1";

            // Send a message to Page 1
            (Application.Current as App).MessageBroker.SendMessage(arg);
        }

    Since the MessageName matches what is in the ucPage1 MessageReceived event procedure, ucPage1 can do anything in response to that event. It is important to note that when the message gets sent, it is sent to all MessageReceived event procedures, not just the one that is looking for a message called "ForPage1". If the user control ucPage1 is not loaded when this message is broadcast, and no other code is listening for it, then it is simply ignored.

    Remove Event Handler: In each class where you add an event handler to the MessageReceived event, you need to make sure to remove that event handler when you are done. Failure to do so can cause a strong reference to the class and thus not allow that object to be garbage collected. In each of your user controls, make sure to remove the event handler in the Unloaded event.

        private void UserControl_Unloaded(object sender, RoutedEventArgs e)
        {
            if (_MessageBroker != null)
                _MessageBroker.MessageReceived -=
                    _MessageBroker_MessageReceived;
        }

    Problems with Message Brokering: As with most "global" classes, or classes that hook up events to other classes, garbage collection is something you need to consider. Just the simple act of hooking up an event procedure to a global event handler creates a reference between your user control and the message broker in the App class. This means that even when your user control is removed from your UI, the class will still be in memory because of the reference to the message broker. This can cause messages to still be handled even though the UI is not being displayed. It is up to you to make sure you remove those event handlers as discussed in the previous section. If you don't, then the garbage collector cannot release those objects.
    Instead of using events to send messages from one object to another, you might consider registering your objects with a central message broker. This message broker then becomes a collection class into which you pass an object and the messages that object wishes to receive. You do end up with the same problem, however: you have to un-register your objects, otherwise they still stay in memory. To alleviate this problem you can look into using the WeakReference class as a way to store your objects so they can be garbage collected if need be. Discussing weak references is beyond the scope of this post, but you can look this up on the web.

    Summary: In this blog post you learned how to create a simple message broker system that allows you to send messages from one object to another without having to reference objects directly. This reduces the coupling between objects in your application. You do need to remember to get rid of any event handlers before your objects go out of scope, or you run the risk of memory leaks and of events being called even though you can no longer access the object that is responding to them.

    NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "A Communication System for XAML Applications" from the drop-down list.
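    A hedged illustration of the WeakReference idea mentioned above, not from the original article: the broker holds subscribers weakly, so a subscriber that is no longer referenced elsewhere can be collected even if it never un-registered. The names here are invented for the sketch; it reuses the PDSAMessageBrokerMessage class defined earlier.

        using System;
        using System.Collections.Generic;

        public interface IMessageSubscriber
        {
            void OnMessageReceived(PDSAMessageBrokerMessage msg);
        }

        // Hedged sketch of a weak-subscription broker. Subscribers are held via
        // WeakReference, so forgetting to un-register does not pin them in memory.
        public class WeakMessageBroker
        {
            private readonly List<WeakReference> _subscribers = new List<WeakReference>();

            public void Register(IMessageSubscriber subscriber)
            {
                _subscribers.Add(new WeakReference(subscriber));
            }

            public void SendMessage(PDSAMessageBrokerMessage msg)
            {
                // Deliver to live subscribers; prune entries the GC has collected.
                _subscribers.RemoveAll(wr =>
                {
                    var target = wr.Target as IMessageSubscriber;
                    if (target == null) return true; // dead reference: prune it
                    target.OnMessageReceived(msg);
                    return false;
                });
            }
        }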

    Read the article

  • Ruby on Rails / PostgreSQL - Library not loaded error when starting server - libpq.5.dylib

    - by Mike McCoy
    I have an app running Ruby 1.9.2, Rails 3, and PostgreSQL 8.3. It was originally set up and working with PostgreSQL 9.1, but I uninstalled 9.1 and installed 8.3 to ensure compatibility with a Heroku shared-database setup. It was running OK, but it's not now. Now, when working on this app, when I run a database upgrade I get this error:

        dlopen(/Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle, 9): Library not loaded: libpq.5.dylib
        Referenced from: /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle
        Reason: no suitable image found. Did find:
        /usr/lib/libpq.5.dylib: no matching architecture in universal wrapper - /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle

    And when I try to run the server I get this error message:

        /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg.rb:4:in `require': dlopen(/Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle, 9): Library not loaded: libpq.5.dylib (LoadError)
        Referenced from: /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle
        Reason: no suitable image found. Did find:
        /usr/lib/libpq.5.dylib: no matching architecture in universal wrapper - /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg.rb:4:in `<top (required)>'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:68:in `require'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:68:in `block (2 levels) in require'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:66:in `each'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:66:in `block in require'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:55:in `each'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:55:in `require'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler.rb:122:in `require'
        from /Users/michaeljmccoy/www/mikemccoy/config/application.rb:7:in `<top (required)>'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:53:in `require'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:53:in `block in <top (required)>'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:50:in `tap'
        from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:50:in `<top (required)>'
        from script/rails:6:in `require'
        from script/rails:6:in `<main>'

    I know these are very similar errors and it probably has to do with a missing path. However, when I add the path to my .profile file and restart the terminal window, I get the same errors.

    Read the article

  • Running emacs in GNU Screen overrides .emacs settings for [home] key binding in FreeBSD 8.2

    - by javanix
    If I use the following .emacs file, I am able to go to the beginning/end of the current line using the home/end keys as I would expect:

        (keyboard-translate ?\C-h ?\C-?)
        (add-to-list 'load-path "/home/sam/programs/go/go/misc/emacs/" t)
        (require 'go-mode-load)
        (global-set-key [kp-home] 'beginning-of-line) ; [Home]
        (global-set-key [home] 'beginning-of-line)    ; [Home]
        (global-set-key [kp-end] 'end-of-line)        ; [End]
        (global-set-key [end] 'end-of-line)           ; [End]

    However, if I open up a screen session it does not function like this (the [home] key still brings me to the beginning of the buffer for some reason). Here is my .screenrc file, if anyone can spot anything funky in there:

        term xterm
        defutf8 on
        defflow off
        startup_message off

        # terminfo and termcap for nice 256 color terminal
        # allow bold colors - necessary for some reason
        attrcolor b ".I"

        # tell screen how to set colors. AB = background, AF=foreground
        termcapinfo xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'

        # use bash as the default login shell
        defshell -bash
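
    One avenue worth checking (my suggestion, not from the question): inside screen, the Home and End keys often arrive as the escape sequences ESC [ 1 ~ and ESC [ 4 ~ rather than the sequences the xterm bindings above expect, so Emacs never sees a [home] event to remap. A hedged sketch for the .emacs file, assuming a reasonably recent Emacs with input-decode-map:

        ;; Inside screen, decode the raw escape sequences into <home>/<end>
        ;; so the global-set-key bindings above apply. Only needed in a
        ;; terminal frame, hence the window-system check.
        (unless window-system
          (define-key input-decode-map "\e[1~" [home])
          (define-key input-decode-map "\e[4~" [end]))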

    Read the article

  • Disable ProxyPass rules within a virtual host on apache 2

    - by chinto
    I have global ProxyPass rules in httpd.conf:

        ProxyPass /test/css http://myserver:7788/test/css
        ProxyPassReverse /test/css http://myserver:7788/test/css

    and I have a virtual host:

        Listen localhost:7788
        NameVirtualHost localhost:7788
        <VirtualHost localhost:7788>
            Alias /test/css/ "C:/jboss/server/default/deploy/test.ear/test-web-app.war/css/"
        </VirtualHost>

    How can I disable all global ProxyPass rules from applying inside this virtual host? NoProxy doesn't seem to work. (The reason I would like to do this is that I have the global rules below, which create a 502 proxy loop if applied within this virtual host:

        # pass all requests to application server
        ProxyPass /test http://localhost:8080/test
        ProxyPassReverse /test http://localhost:8080/test
    )

    What I'm trying to do is serve all static content (like css) using Apache, while still proxying the rest of the requests to the application server.
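
    One approach worth trying (my suggestion, not part of the original question) is Apache's ProxyPass exclusion syntax: a rule whose target is ! tells mod_proxy not to proxy that subtree, and exclusions must be listed before the more general rule they carve out. That can remove the need for the second virtual host entirely:

        # in the main server config: exclusion first, general rule after
        ProxyPass /test/css !
        ProxyPass /test http://localhost:8080/test
        ProxyPassReverse /test http://localhost:8080/test

        # serve the static css straight from disk
        Alias /test/css "C:/jboss/server/default/deploy/test.ear/test-web-app.war/css"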

    Read the article

  • How to manage groups and users in Jenkins

    - by Michael
    I'm trying to use the role-based security plugin in Jenkins, but I'm not sure I am using it right. I've decided to go with Jenkins' own user database as the security realm instead of LDAP, and I'm adding the users one by one. Now, in the Assign Roles screen, I have global roles like administrator, read-only, etc., and I have project-specific roles like prod_a_developer, prod_b_developer... For each user, do I have to both assign one of the global roles and also assign a specific project role? Also, how do I assign a user to a group? Instead of assigning each user a global role, I want to assign a group a global role. Not so trivial. Can someone please help me? Thanks.

    Read the article

  • Multiple IPs on firewall, are these virtual interfaces or what?

    - by Jakobud
    We have 5 static IP addresses from our ISP:

        XXX.XXX.XXX.180
        XXX.XXX.XXX.181
        XXX.XXX.XXX.182
        XXX.XXX.XXX.183
        XXX.XXX.XXX.184

    On our firewall box, the NIC that is connected to our cable modem appears to have all 5 IP addresses set on it. A previous IT guy set this thing up, and I'm not sure exactly what he did. Are these virtual interfaces on this NIC, or what? Here is my ip addr output for that NIC:

        rwd0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether XX:XX:XX:XX:XX:XX brd ff:ff:ff:ff:ff:ff
            inet XXX.XXX.XXX.180/24 brd XXX.XXX.XXX.186 scope global rwd0
            inet XXX.XXX.XXX.181/29 brd XXX.XXX.XXX.186 scope global rwd0:FWB9
            inet XXX.XXX.XXX.182/29 brd XXX.XXX.XXX.186 scope global secondary rwd0:FWB10
            inet XXX.XXX.XXX.183/29 brd XXX.XXX.XXX.186 scope global secondary rwd0:FWB11
            inet XXX.XXX.XXX.184/29 brd XXX.XXX.XXX.186 scope global secondary rwd0:FWB12
            inet6 fe80::250:8bff:fe61:5734/64 scope link
               valid_lft forever preferred_lft forever

    I'm a bit new to firewalls and networking, so I'm just trying to figure out what he had going on here. I know he used Firewall Builder to configure the iptables rules; maybe that has something to do with the "FWB" I see in those names? So my questions are:

    1. What is going on here? Virtual interfaces? Or something else?
    2. If we want to put a second firewall in parallel with this one, but we only want it to handle traffic to XXX.XXX.XXX.182, how do we get rid of the static XXX.XXX.XXX.182 address on this existing firewall box?
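
    For what it's worth (my reading, not part of the original question): those entries look like secondary addresses, sometimes called IP aliases, added to the one physical NIC. The rwd0:FWB9-style labels are the old ifconfig-compatible alias names, and the FWB prefix is consistent with Firewall Builder having created them. Assuming that is the setup, a single ip command removes one of them, for example:

        # drop the .182 secondary address from the NIC (assumes iproute2)
        ip addr del XXX.XXX.XXX.182/29 dev rwd0

    Note that if Firewall Builder manages the interface configuration, it may re-add the address the next time its rules are installed.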

    Read the article

  • Execute a command in all subdirectories (bash)

    - by Luigi R. Viggiano
    I have a directory structure composed of: iTunes/Music/${author}/${album}/${song.mp3}

    I implemented a script to strip my mp3 bitrate to 128 kbps using lame (which works on a single file at a time). My script, 'normalize_mp3.sh', looks like this:

        #!/bin/bash
        SAVEIFS=$IFS
        IFS=$(echo -en "\n\b")
        for f in *.mp3
        do
            lame --cbr $f __out.mp3
            mv __out.mp3 $f
        done
        IFS=$SAVEIFS

    This works fine if I go folder by folder and execute this command. But I'd like to have a "global" command, like in 4DOS, so I can run:

        $ cd iTunes/Music
        $ global normalize_mp3.sh

    and the global command would traverse all subdirs and execute normalize_mp3.sh to strip all my mp3s in all subfolders. Does anyone know if there is a Unix equivalent to the 4DOS global command? I tried to play with find -exec but I just managed to get a headache.
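
    A sketch of one way to do it with find (my suggestion, not from the question), operating on the files directly instead of re-running the per-directory script; the .tmp.mp3 suffix is just a placeholder name:

        # find every .mp3 under the current tree and re-encode it in place;
        # -print0 / read -d '' keep paths with spaces or odd characters intact
        find . -type f -name '*.mp3' -print0 |
        while IFS= read -r -d '' f; do
            lame --cbr "$f" "${f}.tmp.mp3" && mv "${f}.tmp.mp3" "$f"
        done

    An alternative closer to the 4DOS spirit is to run the existing script once in each directory:

        find . -type d -exec sh -c 'cd "$1" && /path/to/normalize_mp3.sh' _ {} \;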

    Read the article

  • Cloudify: bootstrap-localcloud: operation failed?

    - by quanta
    OS: Gentoo, CentOS
    Version: 2.1.0

    Following the quick start guide, I got the error below when running bootstrap-localcloud:

        cloudify@default> bootstrap-localcloud
        STARTING CLOUDIFY MANAGEMENT
        2012-05-30 14:55:50,396 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ; \
        Caused by: org.cloudifysource.shell.commands.CLIException: \
        Error while starting agent. \
        Please make sure that another agent is not already running.
        Operation failed.

    What port is Cloudify using to check that the agent is running?

    PS: it's working fine when running on Windows.

    UPDATE: Wed May 30 22:37:30 ICT 2012

    Reply to @tamirkorem and @Itai Frenkel: I'm pretty sure, because this is the first time I have run that command on the 2 servers. More clearly, here's the output:

        cloudify@default> teardown-localcloud
        Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
        2012-05-30 22:43:33,145 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown failed. Failed to fetch the currently deployed applications list. For force teardown use the -force flag.
        Operation failed.

        cloudify@default> teardown-localcloud -force
        Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
        Failed to fetch the currently deployed applications list. Continuing teardown-localcloud.
        .2012-05-30 22:46:39,040 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown aborted, an agent was not found on the local machine.
        Operation failed.

    and this one is the detailed result:

        cloudify@default> bootstrap-localcloud --verbose
        NIC Address=127.0.0.1
        Lookup Locators=127.0.0.1:4172
        Lookup Groups=localcloud
        Starting agent and management processes:
        gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1
        STARTING CLOUDIFY MANAGEMENT
        2012-05-30 22:36:12,870 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ; Caused by: org.cloudifysource.shell.commands.CLIException: Error while starting agent. Please make sure that another agent is not already running.
        Command executed: /usr/local/src/gigaspaces-cloudify-2.1.0-ga/bin/gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1

    Reply to @Eliran Malka: there is no such process listening on port 4172:

        # netstat --protocol=inet -nlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 127.0.0.1:9050          0.0.0.0:*               LISTEN      2363/tor
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2331/mysqld
        tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      2293/cupsd

    Read the article

  • Q: MySQL Cluster - Data insertion in NDBCLUSTER table - errors out after 5 million rows

    - by Mata
    MySQL Cluster version: mysql-5.6.11 ndb-7.3.2
    Insert load = 50M-row dataset
    Data nodes = 3

        LOAD DATA INFILE '/input_50m/Table_1_sorted.csv'
        IGNORE INTO TABLE nw_ndb
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'

    We recently set up a new MySQL Cluster and are trying to load data from a flat file, but we get the error "Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER" after inserting 5 million rows into a single table. We are using the LOAD DATA INFILE command to load the data into the table from a csv file. The server (mysqld, ndb nodes) has good hardware: 126 GB RAM, 32 GB allocated to mysqld. We tried the settings below with no effect:

        SET autocommit=0;
        SET FOREIGN_KEY_CHECKS=0;
        SET unique_checks=0;
        SET GLOBAL ndb_batch_size=8*1024*1024;
        SET GLOBAL ndb_cache_check_time = 1000;
        SET GLOBAL ndb_index_stat_cache_entries = 10000000;
        SET SESSION BULK_INSERT_BUFFER_SIZE=256217728;
        SET GLOBAL KEY_BUFFER_SIZE=256217728;

    Any clues?
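
    One direction to investigate (my suggestion, not part of the original question): with autocommit=0, the whole LOAD DATA runs as a single huge transaction, and NDB keeps a record on the data nodes for every pending row operation; exhausting those records under a multi-million-row transaction is a classic way to abort the load, and can surface as error 4010. Two hedged mitigations: commit in chunks (split the csv and load the pieces with autocommit on), and/or raise the operation-record limit in the cluster's config.ini, for example:

        # config.ini on the management node -- values are illustrative, not tuned
        [ndbd default]
        MaxNoOfConcurrentOperations=1000000

    A rolling restart of the data nodes is needed for the setting to take effect.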

    Read the article
