Search Results

Search found 15456 results on 619 pages for 'global temporary tables'.


  • How to use filegroups for DB split?

    - by Robin Jain
    In my project I have one DB used for everything, and I want to break it into two databases: static lookup tables in one DB, and tables with dynamic data in the other. My problem is how to use foreign key constraints between those two databases. Can someone suggest a way to proceed, ideally with an example? I thought of creating synonyms for the tables and then putting the constraints on the synonyms, but later I learned that synonyms can't be used in constraints. I need to maintain the relationships among the tables from both databases. The reason for the split is updates: with a new release I only want to update the lookup tables, and that is why I want to split my DB. I also want to know how filegroups could be used for this.

    Read the article

  • How do I detect if there is already a similar document stored in a Lucene index?

    - by Jenea
    Hi. I need to exclude duplicates in my database. The problem is that duplicates are not exact matches but rather similar documents. For this purpose I decided to use FuzzyQuery as follows:

        var fuzzyQuery = new global::Lucene.Net.Search.FuzzyQuery(
            new Term("text", queryText), 0.8f, 0);
        hits = _searcher.Search(query);

    The idea was to set the minimal similarity to 0.8 (which I think is high enough) so that only similar documents are found, excluding those that are not sufficiently similar. To test this code I decided to see if it finds an already existing document: the variable queryText was assigned a value that is stored in the index. The code above found nothing; in other words, it doesn't even detect an exact match. The index was built by this code:

        doc.Add(new global::Lucene.Net.Documents.Field(
            "text",
            text,
            global::Lucene.Net.Documents.Field.Store.YES,
            global::Lucene.Net.Documents.Field.Index.TOKENIZED,
            global::Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS));

    I followed the recommendations from below and the results are: a TermQuery doesn't return any result. A query constructed with

        var _analyzer = new RussianAnalyzer();
        var parser = new global::Lucene.Net.QueryParsers
            .QueryParser("text", _analyzer);
        var query = parser.Parse(queryText);
        var _searcher = new IndexSearcher(
            Settings.General.Default.LuceneIndexDirectoryPath);
        var hits = _searcher.Search(query);

    returns several results: the document that is an exact match has the maximum score, and several other documents have similar content.
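
    A likely reason the fuzzy query finds nothing, stated as an assumption since it depends on how the index was built: RussianAnalyzer lower-cases (and stems) every token at index time, while FuzzyQuery and TermQuery compare the supplied term verbatim against those analyzed tokens, so raw mixed-case query text rarely matches anything; FuzzyQuery also measures edit distance on a single term, not whole-document similarity. A minimal C# sketch of the idea (ToLowerInvariant only approximates what the analyzer does and ignores stemming):

        // Sketch only, using the same Lucene.Net types as above. The term is
        // normalized before it is handed to FuzzyQuery; without this it is
        // compared against lower-cased, stemmed index terms and never matches.
        var term = new Term("text", queryText.ToLowerInvariant());
        var fuzzy = new global::Lucene.Net.Search.FuzzyQuery(term, 0.8f, 0);
        var hits = _searcher.Search(fuzzy);

    For near-duplicate detection of whole documents, parsing the text with the same analyzer (as in the QueryParser example above) and applying a minimum-score threshold to the returned hits is usually closer to what is wanted than a single-term fuzzy match.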

    Read the article

  • How to make a GRANT persist for a table that's being dropped and re-created?

    - by Eli Courtwright
    I'm on a fairly new project where we're still modifying the design of our Oracle 11g database tables. As such, we drop and re-create our tables fairly often to make sure that our table creation scripts work as expected whenever we make a change. Our database consists of 2 schemas. One schema has some tables with INSERT triggers which cause the data to sometimes be copied into tables in our second schema. This requires us to log into the database with an admin account such as sysdba and GRANT the first schema access to the necessary tables in the second schema, e.g.

        GRANT ALL ON schema_two.SomeTable TO schema_one;

    Our problem is that every time we make a change to our database design and want to drop and re-create our database tables, the access we GRANT-ed to schema_one goes away when the table is dropped. Thus, this creates another annoying step wherein we must log in with an admin account to re-GRANT the access every time one of these tables is dropped and re-created. This isn't a huge deal, but I'd love to eliminate as many steps as possible from our development and testing procedures. Is there any way to GRANT access to a table in such a way that the GRANT-ed permissions survive the table being dropped and then re-created? And if this isn't possible, is there a better way to go about this?

    Read the article

  • Export Multiple Crystal Reports ASP.NET

    - by AProgrammer
    Hey all, I want to export 2 different reports when I click an Export button. The problem is the routine only fires once and I only get one report to print out. Am I doing something wrong? I think it has something to do with the HTTPResponse, but I'm not sure. Here's my code:

        Dim badgeSize As Integer = 0 'Drop Down selection
        Dim badgeData As New DataSet 'Visitor Badge Data
        Dim badgeEmployeeData As New DataSet 'Employee Badge Data
        Dim badgeTotals As Integer = 0 'Totals for both

        badgeSize = ddlBadgeSize.SelectedValue

        ' Get Visitor Data
        badgeData = _DatabaseAccess.GetProjectReportData(sessionInfo.myEventID, sessionInfo.EventCreator)

        ' Get Employee Data
        badgeEmployeeData = _DatabaseAccess.GetProjectReportEmployeeData(sessionInfo.myEventID, sessionInfo.EventCreator)

        'Obtain Totals
        badgeTotals = badgeData.Tables(0).Rows.Count + badgeEmployeeData.Tables(0).Rows.Count

        If badgeTotals = 0 Then
            ShowMessage("There are no badges to print.")
            Exit Sub
        End If

        If badgeSize.Equals(0) Then 'Small
            If badgeEmployeeData.Tables(0).Rows.Count > 0 Then
                If badgeEmployeeData.Tables(0).Rows.Count >= 6 Then
                    PrintProjectBadges(badgeEmployeeData, "Employee", badgeSize)
                Else
                    PrintStandardDymo(badgeEmployeeData, "Employee", 1)
                End If
            End If
            If badgeData.Tables(0).Rows.Count > 0 Then
                If badgeData.Tables(0).Rows.Count >= 6 Then
                    PrintProjectBadges(badgeData, "Visitor", badgeSize)
                Else
                    PrintStandardDymo(badgeData, "Visitor", 1)
                End If
            End If
        Else
            'do somethign else
        End If

    And the report code:

        Private Sub PrintProjectBadges(ByVal theData As DataSet, ByVal badgeType As String, ByVal badgeSize As Integer)
            Dim ourReport As New ReportDocument
            Dim crConnectionInfo As New ConnectionInfo(SetCrystalConnection)

            If badgeSize = 0 Then
                Try
                    If badgeType = "Visitor" Then
                        ourReport.Load(Server.MapPath("SmallProjectBadge.rpt"), OpenReportMethod.OpenReportByDefault) 'LIVE SERVER USE
                    Else
                        ourReport.Load(Server.MapPath("SmallProjectEmployeeBadge.rpt"), OpenReportMethod.OpenReportByDefault) 'LIVE SERVER USE
                    End If
                Catch ex As Exception
                    Dim TraceList As New ArrayList
                    TraceList.Add("DBLog")
                    DatabaseAccess.WriteToErrorLog("Visitor Registration", "Printing Project Badges", ex.Message, TraceEventType.Information, 1, TraceList)
                    Exit Sub
                End Try
                ourReport.SetDataSource(theData.Tables("Project"))
            Else
                'Do somethign else...
            End If

            Response.Buffer = True
            'Clear the response content and headers
            Response.ClearContent()
            Response.ClearHeaders()

            SetLogon(ourReport, crConnectionInfo)

            'Export the Report to Response stream in PDF format and file name Customers
            ourReport.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, True, "Visitor_Badges")
            Response.End()
            'Response.Close()
        End Sub

    Any help would be much appreciated.
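
    One probable explanation, offered as an assumption: ExportToHttpResponse writes the first PDF to the response and Response.End() terminates the request, so the second call never executes — and a single HTTP response can only carry one file in any case. Below is a hedged C# sketch (the question's code is VB.NET) that exports each report to a stream and returns both PDFs in one zip; it assumes .NET 4.5's System.IO.Compression and Crystal's ReportDocument.ExportToStream, and the page and method names are made up:

        // Sketch, not the original code: both reports are exported to streams,
        // zipped, and written to the response once, with a single Response.End().
        using System.IO;
        using System.IO.Compression;
        using CrystalDecisions.CrystalReports.Engine;
        using CrystalDecisions.Shared;

        public partial class BadgeExportPage : System.Web.UI.Page
        {
            private void SendBothReports(ReportDocument visitorReport, ReportDocument employeeReport)
            {
                using (var zipStream = new MemoryStream())
                {
                    using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Create, leaveOpen: true))
                    {
                        AddPdf(archive, "Visitor_Badges.pdf", visitorReport);
                        AddPdf(archive, "Employee_Badges.pdf", employeeReport);
                    }

                    Response.Clear();
                    Response.ContentType = "application/zip";
                    Response.AddHeader("Content-Disposition", "attachment; filename=Badges.zip");
                    Response.BinaryWrite(zipStream.ToArray());
                    Response.End();   // end the request only once, after everything is written
                }
            }

            private static void AddPdf(ZipArchive archive, string name, ReportDocument report)
            {
                var entry = archive.CreateEntry(name);
                using (var entryStream = entry.Open())
                using (var pdf = report.ExportToStream(ExportFormatType.PortableDocFormat))
                {
                    pdf.CopyTo(entryStream);   // copy the exported PDF into the zip entry
                }
            }
        }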

    Read the article

  • MS Access to SQL Server searching

    - by malou17
    How can I use this code with a SQL Server database? In this code we used MS Access as the database.

        private void btnSearch_Click(object sender, System.EventArgs e)
        {
            String pcode = txtPcode.Text;
            int ctr = productsDS1.Tables[0].Rows.Count;
            int x;
            bool found = false;
            for (x = 0; x < ctr; x++)
            {
                if (productsDS1.Tables[0].Rows[x][0].ToString() == pcode)
                {
                    found = true;
                    break;
                }
            }
            if (found == true)
            {
                txtPcode.Text = productsDS1.Tables[0].Rows[x][0].ToString();
                txtDesc.Text = productsDS1.Tables[0].Rows[x][1].ToString();
                txtPrice.Text = productsDS1.Tables[0].Rows[x][2].ToString();
            }
            else
            {
                MessageBox.Show("Record Not Found");
            }
        }

        private void btnNew_Click(object sender, System.EventArgs e)
        {
            int cnt = productsDS1.Tables[0].Rows.Count;
            string lastrec = productsDS1.Tables[0].Rows[cnt][0].ToString();
            int newpcode = int.Parse(lastrec) + 1;
            txtPcode.Text = newpcode.ToString();
            txtDesc.Clear();
            txtPrice.Clear();
            txtDesc.Focus();
        }

    Here's the connection string:

        Jet OLEDB:Global Partial Bulk Ops=2;Jet OLEDB:Registry Path=;Jet OLEDB:Database Locking Mode=0;Data Source="J:\2009-2010\1st sem\VC#\Sample\WindowsApplication_Products\PointOfSales.mdb"
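
    Moving from Access to SQL Server mostly means swapping the Jet/OleDb data access for System.Data.SqlClient; the search loop itself only touches the in-memory DataSet, so it can stay as it is. Below is a minimal sketch of filling productsDS1 from SQL Server — the connection string, table and column names are placeholders, not taken from the question:

        // Sketch: fill the same DataSet from SQL Server instead of the Access .mdb.
        using System.Data;
        using System.Data.SqlClient;

        DataSet productsDS1 = new DataSet();

        // Placeholder connection string and query - adjust to the real database.
        string connStr = @"Server=.\SQLEXPRESS;Database=PointOfSales;Integrated Security=True;";

        using (var conn = new SqlConnection(connStr))
        using (var da = new SqlDataAdapter("SELECT ProductCode, Description, Price FROM Products", conn))
        {
            da.Fill(productsDS1);   // the adapter opens and closes the connection itself
        }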

    Read the article

  • LINQ Query using UDF that receives parameters from the query

    - by Ben Fidge
    I need help using a UDF in a LINQ query which calculates a user's position from a fixed point.

        int pointX = 567, int pointY = 534; // random points on a square grid

        var q = from n in _context.Users
                join m in _context.GetUserDistance(n.posY, n.posY, pointX, pointY, n.UserId)
                    on n.UserId equals m.UserId
                select new User()
                {
                    PosX = n.PosX,
                    PosY = n.PosY,
                    Distance = m.Distance,
                    Name = n.Name,
                    UserId = n.UserId
                };

    GetUserDistance is just a UDF that returns a single row in a TVP with that user's distance from the points designated in the pointX and pointY variables, and the designer generates the following for it:

        [global::System.Data.Linq.Mapping.FunctionAttribute(Name="dbo.GetUserDistance", IsComposable=true)]
        public IQueryable<GetUserDistanceResult> GetUserDistance(
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="X1", DbType="Int")] System.Nullable<int> x1,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="X2", DbType="Int")] System.Nullable<int> x2,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="Y1", DbType="Int")] System.Nullable<int> y1,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="Y2", DbType="Int")] System.Nullable<int> y2,
            [global::System.Data.Linq.Mapping.ParameterAttribute(Name="UserId", DbType="Int")] System.Nullable<int> userId)
        {
            return this.CreateMethodCallQuery<GetUserDistanceResult>(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), x1, x2, y1, y2, userId);
        }

    When I try to compile I get: The name 'n' does not exist in the current context.
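
    The compiler error is consistent with how query syntax works: the range variable n is not in scope inside the source expression of a join clause. Because the function is mapped with IsComposable=true, one approach that typically compiles in LINQ to SQL is a second from clause, which translates to CROSS APPLY and lets n's columns be passed into the UDF. Here is a sketch under that assumption; the argument order follows the original query, so check it against the X1/X2/Y1/Y2 parameters of the mapped function, and the projection uses an anonymous type because LINQ to SQL rejects explicit construction of mapped entity types:

        // Sketch: replace the join with a second "from" clause so n is in scope
        // when GetUserDistance is called.
        int pointX = 567, pointY = 534;   // random points on a square grid

        var q = from n in _context.Users
                from m in _context.GetUserDistance(n.PosX, n.PosY, pointX, pointY, n.UserId)
                select new
                {
                    n.PosX,
                    n.PosY,
                    m.Distance,
                    n.Name,
                    n.UserId
                };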

    Read the article

  • Finding the right terminology for a dictionary table

    - by Karl Forner
    My concern is about what I currently call "dictionary tables": database tables containing a list of controlled vocabulary. Let's use an example. Suppose you have a table User containing the fields:

        user_id : primary key
        first_name
        last_name
        user_type_id : foreign key to the UserType table

    and another table UserType with just two fields:

        user_type_id : primary key
        name : the name/value of a particular type of user

    For instance, the UserType table may contain (1, Administrator), (2, PowerUser), (3, Normal)... My question is: what is the canonical term for a table like UserType, one that only contains a list of (distinct) words? I want to publish some code that helps manage this kind of table, but first I have to name them! Thanks for your help. Current state of thought: for now I feel Lookup Tables is a good term. It is also used with the same meaning in these posts:

        http://dbix-class.35028.n2.nabble.com/RFC-Component-for-Lookup-tables-td3504085.html
        http://tonyandrews.blogspot.de/2004/10/otlt-and-eav-two-big-design-mistakes.html
        Lookup Tables Best Practices: DB Tables... or Enumerations

    The only problem is that "lookup table" is also sometimes used to name a junction table.

    Read the article

  • master pages, form pages, form runat=server > all onclick methods on master page?

    - by b0x0rz
    The problem I have is that I have multiple nested master pages:

        level 1: global (header, footer, login, navigation, etc...)
        level 2: specific (search pages, account pages, etc...)
        level 3: the page itself

    Now, since only one form can have runat=server, I put the form on the global master page (so I can handle things like login, feedback, etc.). With this solution I'd also have to put the level 3 methods (such as search) on the level 1 master page, but this will lead to that page being heavy (for development) with code from all places, even code that is used on a single page only (the change-email form, for example). Is there any way to delegate such methods (for example ChangeEMail) from level 1 (the global master page) to level 3 (the single page itself)? To be even more clear: I do NOT want to have the ChangeEMail method in the global master page code-behind, but would like to 'move' it somehow to the only page that will actually use it. The reason it currently has to be on the global master is that the global master has the form runat=server, and there can be only one of those per aspx page. This way it will be easier (more logical) to structure the code. Thanks (hope I explained it correctly). I have searched but did not find any general info on handling this case; the usual answer is "have all the methods on the master page", but I don't like that, so any way of moving it to the specific page would be awesome. Thanks.
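
    One option, sketched as an assumption about the usual Web Forms behaviour: a server control placed in a ContentPlaceHolder of a level-3 page still lives inside the master's single <form runat="server">, but its events are wired to that page's own code-behind, so a page-specific method such as ChangeEMail never has to appear on the global master. The control and class names below are made up:

        // Sketch: the markup on the level-3 content page would contain something like
        //   <asp:Button ID="btnChangeEmail" runat="server" Text="Change e-mail"
        //               OnClick="btnChangeEmail_Click" />
        // and the handler lives in that page's code-behind, not on the master page.
        using System;

        public partial class ChangeEmailPage : System.Web.UI.Page
        {
            protected void btnChangeEmail_Click(object sender, EventArgs e)
            {
                ChangeEMail();   // page-specific logic stays on the page itself
            }

            private void ChangeEMail()
            {
                // ... update the e-mail address here ...
            }
        }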

    Read the article

  • What is the best way to handle connections to MySQL from C#?

    - by srk
    I am working on a C# application which connects to a MySQL server. There are about 20 functions which connect to the database. This application will be deployed on over 200 machines. I am using the code below to connect to my database, and it is identical for all the functions. The problem is, I can see that some connections were not closed and are still alive when deployed on those machines.

    Connection string:

        <add key="Con_Admin" value="server=test-dbserver; database=test_admindb; uid=admin; password=1Password; Use Procedure Bodies=false;" />

    Declaration of the connection string globally in the application [Global.cs]:

        public static MySqlConnection myConn_Instructor = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);

    Function to query the database:

        public static DataSet CheckLogin_Instructor(string UserName, string Password)
        {
            DataSet dsValue = new DataSet();
            //MySqlConnection myConn = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);
            try
            {
                string Query = "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password`," +
                               " FROM accounts " +
                               " WHERE accounts.str_nric = '" + UserName + "' AND accounts.str_password = '" + Password + "\'";
                MySqlCommand cmd = new MySqlCommand(Query, Global.myConn_Instructor);
                MySqlDataAdapter da = new MySqlDataAdapter();
                if (Global.myConn_Instructor.State == ConnectionState.Closed)
                {
                    Global.myConn_Instructor.Open();
                }
                cmd.ExecuteScalar();
                da.SelectCommand = cmd;
                da.Fill(dsValue);
                Global.myConn_Instructor.Close();
            }
            catch (Exception ex)
            {
                Global.myConn_Instructor.Close();
                ExceptionHandler.writeToLogFile(System.Environment.NewLine + "Target : " + ex.TargetSite.ToString() +
                    System.Environment.NewLine + "Message : " + ex.Message.ToString() +
                    System.Environment.NewLine + "Stack : " + ex.StackTrace.ToString());
            }
            return dsValue;
        }
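
    A common way to avoid leaked connections, offered as a sketch rather than the only answer: stop sharing one static MySqlConnection across the application and instead create a short-lived connection per call inside using blocks, letting the connector's connection pool handle reuse; passing values as parameters instead of concatenating them into the SQL also removes the injection risk. The sketch keeps the method shape from the question:

        // Sketch of the same method with a pooled, per-call connection and parameters.
        // Assumes MySql.Data's MySqlConnection/MySqlCommand/MySqlDataAdapter as above.
        public static DataSet CheckLogin_Instructor(string userName, string password)
        {
            var dsValue = new DataSet();
            string connStr = ConfigurationSettings.AppSettings["Con_Admin"];

            const string query =
                "SELECT accounts.str_nric AS Nric, accounts.str_password AS `Password` " +
                "FROM accounts " +
                "WHERE accounts.str_nric = @nric AND accounts.str_password = @password";

            using (var conn = new MySqlConnection(connStr))
            using (var cmd = new MySqlCommand(query, conn))
            using (var da = new MySqlDataAdapter(cmd))
            {
                cmd.Parameters.AddWithValue("@nric", userName);
                cmd.Parameters.AddWithValue("@password", password);
                da.Fill(dsValue);   // Fill opens and closes the connection for us
            }
            return dsValue;        // the connection is disposed even if Fill throws
        }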

    Read the article

  • Remote interface lookup problem in GlassFish 3

    - by andersmo
    I have deployed a war file, with action classes and a facade, and a jar file with EJB components (a stateless bean, a couple of entities and a persistence.xml) on GlassFish 3. My problem is that I can't find the remote interface to the stateless bean from my facade. My bean and interface look like this:

        @Remote
        public interface RecordService {...

        @Stateless(name="RecordServiceBean", mappedName="ejb/RecordServiceJNDI")
        public class RecordServiceImpl implements RecordService {
            @PersistenceContext(unitName="record_persistence_ctx")
            private EntityManager em;...

    If I look in the server.log the portable JNDI names look like this:

        Portable JNDI names for EJB RecordServiceBean : [java:global/recordEjb/RecordServiceBean, java:global/recordEjb/RecordServiceBean!domain.service.RecordService]|#]

    And my facade:

        ...InitialContext ctx = new InitialContext();
        try {
            recordService = (RecordService) ctx.lookup("java:global/recordEjb/RecordServiceBean!domain.service.RecordService");
        } catch (Throwable t) {
            System.out.println("ooops");
            try {
                recordService = (RecordService) ctx.lookup("java:global/recordEjb/RecordServiceImpl");
            } catch (Throwable t2) {
                System.out.println("noooo!");
            }...
        }

    When the facade makes the first call this exception occurs:

        javax.naming.NamingException: Lookup failed for 'java:global/recordEjb/RecordServiceBean!domain.service.RecordService' in SerialContext [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacedomain.service.RecordService [Root exception is java.lang.ClassNotFoundException: domain.service.RecordService]]

    And the second call:

        javax.naming.NamingException: Lookup failed for 'java:global/recordEjb/RecordServiceBean' in SerialContext [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacedomain.service.RecordService [Root exception is java.lang.ClassNotFoundException: domain.service.RecordService]]

    I have also tested injecting the bean with the @EJB annotation:

        @EJB(name="RecordServiceBean")
        private RecordService recordService;

    But that doesn't work either. What have I missed? I tried with an ejb-jar.xml but that shouldn't be necessary. Is there anyone who can tell me how to fix this problem?

    Read the article

  • Faster quadrature decoder loops with Python code

    - by Kelei
    I'm working with a BeagleBone Black and using Adafruit's IO Python library. I wrote a simple quadrature decoding function and it works perfectly fine when the motor runs at about 1800 RPM. But when the motor runs at higher speeds, the code starts missing some of the interrupts and the encoder counts start to accumulate errors. Do you have any suggestions as to how I can make the code more efficient, or are there functions which can cycle the interrupts at a higher frequency? Thanks, Kel. Here's the code:

        # Define encoder count function
        def encodercount(term):
            global counts
            global Encoder_A
            global Encoder_A_old
            global Encoder_B
            global Encoder_B_old
            global error

            Encoder_A = GPIO.input('P8_7')  # stores the value of the encoders at time of interrupt
            Encoder_B = GPIO.input('P8_8')

            if Encoder_A == Encoder_A_old and Encoder_B == Encoder_B_old:
                # this will be an error
                error += 1
                print 'Error count is %s' % error
            elif (Encoder_A == 1 and Encoder_B_old == 0) or (Encoder_A == 0 and Encoder_B_old == 1):
                # this will be clockwise rotation
                counts += 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            elif (Encoder_A == 1 and Encoder_B_old == 1) or (Encoder_A == 0 and Encoder_B_old == 0):
                # this will be counter-clockwise rotation
                counts -= 1
                print 'Encoder count is %s' % counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            else:
                # this will be an error as well
                error += 1
                print 'Error count is %s' % error

            Encoder_A_old = Encoder_A  # store the current encoder values as old values to be used as comparison in the next loop
            Encoder_B_old = Encoder_B

        # Initialize the interrupts - these trigger on both the rising and falling edges
        GPIO.add_event_detect('P8_7', GPIO.BOTH, callback = encodercount)  # Encoder A
        GPIO.add_event_detect('P8_8', GPIO.BOTH, callback = encodercount)  # Encoder B

        # This is the part of the code which runs normally in the background
        while True:
            time.sleep(1)

    Read the article

  • Stored procedure for generic MERGE

    - by GilliVilla
    I have a set of 10 tables in a database (DB1), and there are 10 tables in another database (DB2) with exactly the same schema on the same SQL Server 2008 R2 database server machine. The 10 tables in DB1 are frequently updated with data. I intend to write a stored procedure that would run once every day to synchronize the 10 tables in DB1 with DB2. The stored procedure would make use of the MERGE statement. Now, my aim is to make this as generic and parameterized as possible, that is, to accommodate more tables down the line and to accommodate different source and target DB names. Definitely no hard coding is intended. This is my algorithm so far:

    1. Have the database names as parameters.
    2. Have the first query within the stored procedure return the names of the 10 tables from a lookup table (this can be 10, 20 or whatever).
    3. Have a generic MERGE statement that does the sync for each of the above set of tables (based on the primary key?).

    This is where I need more input. What is the best way to achieve this stored procedure? SQL syntax would be helpful.

    Read the article

  • Can it be done?

    - by bzarah
    We are in the design phase of a project whose goal is replatforming a classic ASP application to ASP.NET 4.0. The system needs to be entirely web based. There are several new requirements for the new system that make this a challenging project:

    1. The system needs to be database independent. It must, at version 1.0, support MS SQL Server, Oracle, MySQL, Postgres and DB2.
    2. The system must allow easy reporting from the database by third-party reporting packages.
    3. The system must allow an administrative end user to create their own tables in the database through the web-based interface.
    4. The system must allow an administrative end user to design/configure a (web-based) user interface where they can select tables and fields in the system (either our system's core tables or their own custom tables created in #3).
    5. The system must allow an administrative end user to create and maintain relationships between these custom-created tables, and also between these tables and our system's core tables.
    6. The system must allow an administrative end user to create business rules that enforce validation, show/hide UI elements, and block certain actions based on the identity of specific users, specific user groups or privileges.

    Essentially it's a system that has some core ticket-tracking functionality, but allows the end user to extend the interface, business rules and the database. Is this possible to build in a .NET, web-based environment? If so, what do you think the level of effort would be to get this done? We are currently a 6-person shop, with 2.5 full-time developers.

    Read the article

  • Static variable definition order in c++

    - by rafeeq
    Hi, I have a class Tools which has a static variable std::vector m_tools. Can I insert values into the static variable from the global scope of other classes defined in other files? Example:

    tools.h file:

        class Tools
        {
        public:
            static std::vector<std::vector> m_tools;

            void print()
            {
                for(int i=0 i< m_tools.size() ; i++)
                    std::cout<<"Tools initialized :"<< m_tools[i];
            }
        }

    tools.cpp file:

        std::vector<std::vector> Tools::m_tools; //Definition

    Using a Register class constructor to insert a new string into the static variable:

        class Register
        {
        public:
            Register(std::string str)
            {
                Tools::m_tools.pushback(str);
            }
        };

    Different classes then insert their string into the static variable at global scope:

    first_tool.cpp:

        //Global scope declare global register variable
        Register a("first_tool");

    second_tool.cpp:

        //Global scope declare global register variable
        Register a("second_tool");

    Main.cpp:

        void main()
        {
            Tools abc;
            abc.print();
        }

    Will this work? In the above example only one string is getting inserted into the static list. The problem looks like "in global scope it tries to insert the element before the definition is done". Please let me know, is there any way to set the static definition priority? Or is there any alternative way of doing the same?

    Read the article

  • Error while setting UserAccess permission for WebForms?

    - by ksg
    I've created a class named BaseClass.cs and I've written a function that is called from its constructor. Here's how it looks:

        public class BasePage : Page
        {
            public BasePage()
            {
                setUserPermission();
            }

            private void setUserPermission()
            {
                String strPathAndQuery = HttpContext.Current.Request.Url.PathAndQuery;
                string strulr = strPathAndQuery.Replace("/SGERP/", "../");
                Session["Url"] = strulr;

                GEN_FORMS clsForm = new GEN_FORMS();
                clsForm.Form_Logical_Name = Session["Url"].ToString();
                clsForm.User_ID = Convert.ToInt32(Session["User_ID"]);
                DataSet dsPermission = clsForm.RETREIVE_BUTTON_PERMISSIONS();
                if (dsPermission.Tables.Count > 0)
                {
                    if (dsPermission.Tables[1].Rows.Count > 0)
                    {
                        Can_Add = Convert.ToBoolean(dsPermission.Tables[1].Rows[0]["Can_Add"].ToString());
                        Can_Delete = Convert.ToBoolean(dsPermission.Tables[1].Rows[0]["Can_Delete"].ToString());
                        Can_Edit = Convert.ToBoolean(dsPermission.Tables[1].Rows[0]["Can_Edit"].ToString());
                        Can_Print = Convert.ToBoolean(dsPermission.Tables[1].Rows[0]["Can_Print"].ToString());
                        Can_View = Convert.ToBoolean(dsPermission.Tables[1].Rows[0]["Can_Print"].ToString());
                    }
                }
            }
        }

    I've inherited this class in my webform so that when the page loads, the setUserPermission function is executed. My web page looks like this:

        public partial class Setting_CompanyDetails : BasePage

    My problem is that I cannot access Session["Url"] in my BasePage. I'm getting the following error:

        Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive. Please also make sure that System.Web.SessionStateModule or a custom session state module is included in the <configuration>\<system.web>\<httpModules> section in the application configuration.

    How can I solve this issue? Is this the right way to set user access permissions?
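
    A likely cause, stated as an assumption: the page constructor runs before ASP.NET has attached session state to the request, so Session (and HttpContext.Current.Session) is not usable there; the usual fix is to call the permission code from the page lifecycle instead, for example OnInit. A minimal sketch:

        // Sketch: call setUserPermission from OnInit rather than the constructor.
        // Session state is attached to the request after the page object is
        // constructed but before Init runs, so it is available from here on.
        using System;
        using System.Web.UI;

        public class BasePage : Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                setUserPermission();
            }

            private void setUserPermission()
            {
                // same body as in the question
            }
        }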

    Read the article

  • WebCenter Customer Spotlight: Hitachi Data Systems

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter Watch this Webcast to see a live demo on how HDS creates multilingual content for their 35+ regional websites  Solution SummaryHitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. HDS is based in Santa Clara, California, and has over 5,300 employees in more then 100 countries and regions. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, expand the use of the existing content management systems and implement a centralized translation management system. HDS implemented a global web content management system based on Oracle WebCenter Content and integrated the Lingotek translation management system to manage their multilingual content. The implemented solution provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines Company OverviewHitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. and part of the Hitachi Information Systems & Telecommunications Division. The company sells through direct and indirect channels in more than 170 countries and regions. Its customers include of 50 percent of the Fortune 100 companies. HDS is based in Santa Clara California and has over 5,300 employees in more than 100 countries and regions. Business ChallengesHDS has over 35 global websites and the lack of global web capabilities led to inconsistency of messaging, slower time to market and failed to address local language needs. There was an extensive operational overhead due to manual and redundant processes. Translation efforts where superficial, inconsistent and wasteful and the lack of translation automation tools discouraged localization.  HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, expand the use of the existing content management systems and implement a centralized translation management system. Solution DeployedHDS implemented a global web content management system based on Oracle WebCenter Content. The solution supports decentralized publishing for their 35+ global sites to address local market needs while ensuring editorial and brand review trough embedded review processes. They integrated the Lingotek translation management system into Oracle WebCenter Content to manage their multilingual content. Business Results Provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines Enables end-to-end content lifecycle management across multiple languages Leverage translation memory for reuse and consistency Reduce time to market with central repository of translated content Additional Information HDS Webcast Oracle WebCenter Content Lingotek website

    Read the article

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline. They both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations.

    As this is the first time I'm making a threaded application, I'm not sure if the following method for thread synchronization is correct. First of all, I use two global variables to control the two threads: if a global variable is set to 1, that means the thread can start working, otherwise it must not work. This is checked by the thread running an infinite loop, and if it detects that the global variable has changed its value, it does its job and then sets the variable back to 0 again. The main program also uses an empty while loop to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1.

    My dilemma is that, while this process works under some conditions, it slows down the program considerably, and also it fails to run properly when compiled for Release in Visual Studio, or when compiled with any sort of -O optimization with gcc (i.e. nothing on screen, even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations, it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of fscanf_s calls, but you can replace those with the usual fscanf, if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it.

    My questions are:

    1. Why does adding these two threads slow down my application?
    2. Why doesn't it work when compiling for Release or with optimizations?
    3. Can I speed up the application with threads? If so, how?

    Thanks in advance.
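
    Busy-waiting on plain global flags is a plausible culprit for both symptoms: the spinning main thread and workers burn the very cores the rasterizer needs, and without atomics or other synchronization the flag accesses are a data race, which optimizing builds are free to break — matching "works in Debug, breaks with -O". The renderer is C/C++, so the sketch below is only an illustration of the alternative pattern in C# (blocking on events instead of spinning); the same shape maps onto std::condition_variable or Win32 events:

        // Sketch: workers sleep on a start event instead of spinning on globals,
        // and the main thread blocks until both signal completion.
        using System.Threading;

        class ScanlineWorkers
        {
            private readonly AutoResetEvent[] start =
                { new AutoResetEvent(false), new AutoResetEvent(false) };
            private readonly CountdownEvent done = new CountdownEvent(2);
            private volatile object triangle;   // filled in by the main thread before signalling

            public ScanlineWorkers()
            {
                for (int i = 0; i < 2; i++)
                {
                    int id = i;   // capture a copy for the lambda
                    new Thread(() => WorkerLoop(id)) { IsBackground = true }.Start();
                }
            }

            public void RasterizeTriangle(object tri)
            {
                triangle = tri;
                done.Reset(2);      // expect two completions
                start[0].Set();     // wake the even-scanline worker
                start[1].Set();     // wake the odd-scanline worker
                done.Wait();        // block, without a busy loop, until both finish
            }

            private void WorkerLoop(int id)
            {
                while (true)
                {
                    start[id].WaitOne();        // sleep until there is work
                    var tri = triangle;         // snapshot the shared triangle data
                    // ... rasterize even (id == 0) or odd (id == 1) scanlines of tri ...
                    done.Signal();
                }
            }
        }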

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the windows.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You only can add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. 
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article

  • A Communication System for XAML Applications

    - by psheriff
    In any application, you want to keep the coupling between any two or more objects as loose as possible. Coupling happens when one class contains a property that is used in another class, or uses another class in one of its methods. If you have this situation, then this is called strong or tight coupling. One popular design pattern to help with keeping objects loosely coupled is called the Mediator design pattern. The basics of this pattern are very simple; avoid one object directly talking to another object, and instead use another class to mediate between the two. As with most of my blog posts, the purpose is to introduce you to a simple approach to using a message broker, not all of the fine details. IPDSAMessageBroker Interface As with most implementations of a design pattern, you typically start with an interface or an abstract base class. In this particular instance, an Interface will work just fine. The interface for our Message Broker class just contains a single method “SendMessage” and one event “MessageReceived”. public delegate void MessageReceivedEventHandler( object sender, PDSAMessageBrokerEventArgs e); public interface IPDSAMessageBroker{  void SendMessage(PDSAMessageBrokerMessage msg);   event MessageReceivedEventHandler MessageReceived;} PDSAMessageBrokerMessage Class As you can see in the interface, the SendMessage method requires a type of PDSAMessageBrokerMessage to be passed to it. This class simply has a MessageName which is a ‘string’ type and a MessageBody property which is of the type ‘object’ so you can pass whatever you want in the body. You might pass a string in the body, or a complete Customer object. The MessageName property will help the receiver of the message know what is in the MessageBody property. public class PDSAMessageBrokerMessage{  public PDSAMessageBrokerMessage()  {  }   public PDSAMessageBrokerMessage(string name, object body)  {    MessageName = name;    MessageBody = body;  }   public string MessageName { get; set; }   public object MessageBody { get; set; }} PDSAMessageBrokerEventArgs Class As our message broker class will be raising an event that others can respond to, it is a good idea to create your own event argument class. This class will inherit from the System.EventArgs class and add a couple of additional properties. The properties are the MessageName and Message. The MessageName property is simply a string value. The Message property is a type of a PDSAMessageBrokerMessage class. public class PDSAMessageBrokerEventArgs : EventArgs{  public PDSAMessageBrokerEventArgs()  {  }   public PDSAMessageBrokerEventArgs(string name,     PDSAMessageBrokerMessage msg)  {    MessageName = name;    Message = msg;  }   public string MessageName { get; set; }   public PDSAMessageBrokerMessage Message { get; set; }} PDSAMessageBroker Class Now that you have an interface class and a class to pass a message through an event, it is time to create your actual PDSAMessageBroker class. This class implements the SendMessage method and will also create the event handler for the delegate created in your Interface. 
public class PDSAMessageBroker : IPDSAMessageBroker{  public void SendMessage(PDSAMessageBrokerMessage msg)  {    PDSAMessageBrokerEventArgs args;     args = new PDSAMessageBrokerEventArgs(      msg.MessageName, msg);     RaiseMessageReceived(args);  }   public event MessageReceivedEventHandler MessageReceived;   protected void RaiseMessageReceived(    PDSAMessageBrokerEventArgs e)  {    if (null != MessageReceived)      MessageReceived(this, e);  }} The SendMessage method will take a PDSAMessageBrokerMessage object as an argument. It then creates an instance of a PDSAMessageBrokerEventArgs class, passing to the constructor two items: the MessageName from the PDSAMessageBrokerMessage object and also the object itself. It may seem a little redundant to pass in the message name when that same message name is part of the message, but it does make consuming the event and checking for the message name a little cleaner – as you will see in the next section. Create a Global Message Broker In your WPF application, create an instance of this message broker class in the App class located in the App.xaml file. Create a public property in the App class and create a new instance of that class in the OnStartUp event procedure as shown in the following code: public partial class App : Application{  public PDSAMessageBroker MessageBroker { get; set; }   protected override void OnStartup(StartupEventArgs e)  {    base.OnStartup(e);     MessageBroker = new PDSAMessageBroker();  }} Sending and Receiving Messages Let’s assume you have a user control that you load into a control on your main window and you want to send a message from that user control to the main window. You might have the main window display a message box, or put a string into a status bar as shown in Figure 1. Figure 1: The main window can receive and send messages The first thing you do in the main window is to hook up an event procedure to the MessageReceived event of the global message broker. This is done in the constructor of the main window: public MainWindow(){  InitializeComponent();   (Application.Current as App).MessageBroker.     MessageReceived += new MessageReceivedEventHandler(       MessageBroker_MessageReceived);} One piece of code you might not be familiar with is accessing a property defined in the App class of your XAML application. Within the App.Xaml file is a class named App that inherits from the Application object. You access the global instance of this App class by using Application.Current. You cast Application.Current to ‘App’ prior to accessing any of the public properties or methods you defined in the App class. Thus, the code (Application.Current as App).MessageBroker, allows you to get at the MessageBroker property defined in the App class. In the MessageReceived event procedure in the main window (shown below) you can now check to see if the MessageName property of the PDSAMessageBrokerEventArgs is equal to “StatusBar” and if it is, then display the message body into the status bar text block control. void MessageBroker_MessageReceived(object sender,   PDSAMessageBrokerEventArgs e){  switch (e.MessageName)  {    case "StatusBar":      tbStatus.Text = e.Message.MessageBody.ToString();      break;  }} In the Page 1 user control’s Loaded event procedure you will send the message “StatusBar” through the global message broker to any listener using the following code: private void UserControl_Loaded(object sender,  RoutedEventArgs e){  // Send Status Message  (Application.Current as App).MessageBroker.    
SendMessage(new PDSAMessageBrokerMessage("StatusBar",      "This is Page 1"));} Since the main window is listening for the message ‘StatusBar’, it will display the value “This is Page 1” in the status bar at the bottom of the main window. Sending a Message to a User Control The previous example sent a message from the user control to the main window. You can also send messages from the main window to any listener as well. Remember that the global message broker is really just a broadcaster to anyone who has hooked into the MessageReceived event. In the constructor of the user control named ucPage1 you can hook into the global message broker’s MessageReceived event. You can then listen for any messages that are sent to this control by using a similar switch-case structure like that in the main window. public ucPage1(){  InitializeComponent();   // Hook to the Global Message Broker  (Application.Current as App).MessageBroker.    MessageReceived += new MessageReceivedEventHandler(      MessageBroker_MessageReceived);} void MessageBroker_MessageReceived(object sender,  PDSAMessageBrokerEventArgs e){  // Look for messages intended for Page 1  switch (e.MessageName)  {    case "ForPage1":      MessageBox.Show(e.Message.MessageBody.ToString());      break;  }} Once the ucPage1 user control has been loaded into the main window you can then send a message using the following code: private void btnSendToPage1_Click(object sender,  RoutedEventArgs e){  PDSAMessageBrokerMessage arg =     new PDSAMessageBrokerMessage();   arg.MessageName = "ForPage1";  arg.MessageBody = "Message For Page 1";   // Send a message to Page 1  (Application.Current as App).MessageBroker.SendMessage(arg);} Since the MessageName matches what is in the ucPage1 MessageReceived event procedure, ucPage1 can do anything in response to that event. It is important to note that when the message gets sent it is sent to all MessageReceived event procedures, not just the one that is looking for a message called “ForPage1”. If the user control ucPage1 is not loaded and this message is broadcast, but no other code is listening for it, then it is simply ignored. Remove Event Handler In each class where you add an event handler to the MessageReceived event you need to make sure to remove those event handlers when you are done. Failure to do so can cause a strong reference to the class and thus not allow that object to be garbage collected. In each of your user control’s make sure in the Unloaded event to remove the event handler. private void UserControl_Unloaded(object sender, RoutedEventArgs e){  if (_MessageBroker != null)    _MessageBroker.MessageReceived -=         _MessageBroker_MessageReceived;} Problems with Message Brokering As with most “global” classes or classes that hook up events to other classes, garbage collection is something you need to consider. Just the simple act of hooking up an event procedure to a global event handler creates a reference between your user control and the message broker in the App class. This means that even when your user control is removed from your UI, the class will still be in memory because of the reference to the message broker. This can cause messages to still being handled even though the UI is not being displayed. It is up to you to make sure you remove those event handlers as discussed in the previous section. If you don’t, then the garbage collector cannot release those objects. 
Instead of using events to send messages from one object to another you might consider registering your objects with a central message broker. This message broker now becomes a collection class into which you pass an object and what messages that object wishes to receive. You do end up with the same problem however. You have to un-register your objects; otherwise they still stay in memory. To alleviate this problem you can look into using the WeakReference class as a method to store your objects so they can be garbage collected if need be. Discussing Weak References is beyond the scope of this post, but you can look this up on the web. Summary In this blog post you learned how to create a simple message broker system that will allow you to send messages from one object to another without having to reference objects directly. This does reduce the coupling between objects in your application. You do need to remember to get rid of any event handlers prior to your objects going out of scope or you run the risk of having memory leaks and events being called even though you can no longer access the object that is responding to that event. NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “A Communication System for XAML Applications” from the drop down list.

    Read the article

  • Ruby on Rails / PostgreSQL - Library not loaded error when starting the server - libpq.5.dylib

    - by Mike McCoy
    I have app that is running Ruby 1.9.2, Rails 3, and postgreSQL 8.3. It was originally setup and working with postgreSQL 9.1, but I uninstalled 9.1 and installed and changed to 8.3 insure compatibility on a Heroku shared database setup. It was running ok, but it's not now Now, when working on this app, when I run a database upgrade I get this error: dlopen(/Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle, 9): Library not loaded: libpq.5.dylib Referenced from: /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle Reason: no suitable image found. Did find: /usr/lib/libpq.5.dylib: no matching architecture in universal wrapper - /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle And when I try to run the server I get this error message: /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg.rb:4:in `require': dlopen(/Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle, 9): Library not loaded: libpq.5.dylib (LoadError) Referenced from: /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle Reason: no suitable image found. Did find: /usr/lib/libpq.5.dylib: no matching architecture in universal wrapper - /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg_ext.bundle from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/pg-0.12.2/lib/pg.rb:4:in `<top (required)>' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:68:in `require' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:68:in `block (2 levels) in require' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:66:in `each' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:66:in `block in require' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:55:in `each' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler/runtime.rb:55:in `require' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290@global/gems/bundler-1.0.21/lib/bundler.rb:122:in `require' from /Users/michaeljmccoy/www/mikemccoy/config/application.rb:7:in `<top (required)>' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:53:in `require' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:53:in `block in <top (required)>' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:50:in `tap' from /Users/michaeljmccoy/.rvm/gems/ruby-1.9.2-p290/gems/railties-3.2.0.rc2/lib/rails/commands.rb:50:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' I know they are very similar errors and probably has to do with a missing path. However, when I add the path to my .profile file and restart the terminal window, I get the same errors.

    Read the article

  • Running emacs in GNU Screen overrides .emacs settings for [home] key binding in FreeBSD 8.2

    - by javanix
    If I use the following .emacs file, I am able to go to the beginning/end of the current line using the home/end keys as I would expect.

        (keyboard-translate ?\C-h ?\C-?)
        (add-to-list 'load-path "/home/sam/programs/go/go/misc/emacs/" t)
        (require 'go-mode-load)
        (global-set-key [kp-home] 'beginning-of-line) ; [Home]
        (global-set-key [home] 'beginning-of-line) ; [Home]
        (global-set-key [kp-end] 'end-of-line) ; [End]
        (global-set-key [end] 'end-of-line) ; [End]

    However, if I open up a screen session it does not function like this (the [home] key still brings me to the beginning of the buffer for some reason). Here is my .screenrc file if anyone can spot anything funky in there:

        term xterm
        defutf8 on
        defflow off
        startup_message off
        # terminfo and termcap for nice 256 color terminal
        # allow bold colors - necessary for some reason
        attrcolor b ".I"
        # tell screen how to set colors. AB = background, AF=foreground
        termcapinfo xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'
        #use bash as the default login shell
        defshell -bash

    Read the article

  • Disable ProxyPass rules within a virtual host on apache 2

    - by chinto
    I have ProxyPass rules at global level in httpd.conf:

        ProxyPass /test/css http://myserver:7788/test/css
        ProxyPassReverse /test/css http://myserver:7788/test/css

    and I have a virtual host:

        Listen localhost:7788
        NameVirtualHost localhost:7788
        <VirtualHost localhost:7788>
            Alias /test/css/ "C:/jboss/server/default/deploy/test.ear/test-web-app.war/css/"
        </VirtualHost>

    I would like to disable all global ProxyPass rules applying in this virtual host. NoProxy doesn't seem to work. (The reason I would like to do this is that I have the global rules below, which create a 502 proxy loop if applied within this virtual host:

        #pass all requests to application server
        ProxyPass /test http://localhost:8080/test
        ProxyPassReverse /test http://localhost:8080/test

    ) What I'm trying to do is serve all static content (like css) using Apache, while still proxying all other requests to the application server.

    Read the article

  • OnExit is not being entered via PostSharp in an ASP.NET project

    - by mark smith
    Hi there, I have setup PostSharp and it appears to be working but i don't get it entering OnExit (i have logged setup to ensure it is working) ... Its a bit tricky to configure with asp.net - or is it just me ... I am using the 1.5 new version I basically have the following in my web.config and i had to add the SearchPath otherwise it can't find my assemblies <postsharp directory="C:\Program Files\PostSharp 1.5" trace="true"> <parameters> <!--<add name="parameter-name" value="parameter-value"/>--> </parameters> <searchPath> <!-- Always add the binary folder to the search path. --> <add name="bin" value="~\bin"/> </searchPath> </postsharp> I have set tracing on but what is strange to me is that it appears to build to the temp directory, maybe this is my issue, i am unsure .. hence i do F5 ... Is it possible to name the Output directory and output file?? As you can see it is editing a DLL in the temp dir so IIS is no longer in control so it doesn't execute it ??? Confused! :-) C:\Program Files\PostSharp 1.5\postsharp.exe "/P:Output=C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp\App_Web_04ae3ewy.dll" "/P:IntermediateDirectory=C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp " /P:CleanIntermediate=False /P:ReferenceDirectory=. /P:SignAssembly=False /P:PrivateKeyLocation= /P:ResolvedReferences= "/P:SearchPath=C:\Source Code\Visual Studio 2008\Projects\mysitemvc\mysitemvc\bin," /V /SkipAutoUpdate "C:\Program Files\PostSharp 1.5\Default.psproj" "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\before-postsharp\App_Web_04ae3ewy.dll" PostSharp 1.5 [1.5.6.627] - Copyright (c) Gael Fraiteur, 2005-2009. info PS0035: C:\Windows\Microsoft.NET\Framework\v2.0.50727\ilasm.exe "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp\App_Web_04ae3ewy.il" /QUIET /DLL /PDB "/RESOURCE=C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp\App_Web_04ae3ewy.res" "/OUTPUT=C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\mysitemvc-1.2\c2087140\8ac2dc93\postsharp\App_Web_04ae3ewy.dll" /SUBSYSTEM=3 /FLAGS=1 /BASE=18481152 /STACK=1048576 /ALIGNMENT=512 /MDV=v2.0.50727

    Read the article
