Search Results

Search found 30279 results on 1212 pages for 'database drift'.

  • Why does a change of Session State provider lead to an ASPx page yielding garbage?

    - by Rory Becker
    I have an ASP.NET webapp which has worked very well up until now. I was recently asked to explore ways of making it scale better. I found that separation of the database and the webapp would help. Further, I was told that if I changed my session provider to SQLServer, I would be able to duplicate the web stack to several machines which could each call back to the state server, allowing the load to be distributed better. This sounds logical. So I created an ASPState database using ASPNet_RegSQL.exe as detailed in many locations across the web and changed the web.config on my app from:
      <sessionState mode="InProc" cookieless="false" timeout="20" />
    to:
      <sessionState mode="SQLServer" sqlConnectionString="Server=SomeSQLServer;user=SomeUser;password=SomePassword" cookieless="false" timeout="20" />
    Then I browsed to my app, which presented me with its logon screen and I duly logged in. Once in I was presented, not with the page I was expecting, but with a page of garbage output. I can change the session state back and forth; the problem goes away and then comes back based on which set of configuration I use. Why is this happening?
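
    One property of out-of-process session state that this configuration change brings into play: with mode="SQLServer" everything placed in Session must be serializable, because it is persisted to the ASPState database instead of living in memory. A minimal sketch of what that implies for a type stored in Session (the class name and properties here are hypothetical, not taken from the question):

      // Session-stored types must be binary-serializable once mode="SQLServer"
      // (or "StateServer") is used; under "InProc" the attribute is never exercised.
      [Serializable]
      public class LoginInfo
      {
          public string UserName { get; set; }
          public DateTime LoggedInAt { get; set; }
      }

      // Inside a page or controller, usage is unchanged:
      // Session["login"] = new LoginInfo { UserName = "someuser", LoggedInAt = DateTime.UtcNow };
      // var info = (LoginInfo)Session["login"];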

  • How to Store and Retrieve Images Using MsSQL (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into an MsSQL database. I'll try to break this down as best as I can: What data type should I be using to store image files (jpeg/png/gif/etc)? Right now my table is using the image data type, but I am curious if varbinary would be a better option. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built in functions that allow insertions of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form with PHP handling the input data and placing it into the table? How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture. Would I have to have a header(Content type: image/jpeg)? I have no problem doing any of these things with MySQL, but the MsSQL environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated. Thank you very much for your responses!
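
    On the first point: in SQL Server 2005 and later the image type is deprecated in favour of varbinary(max), so new tables are usually defined with the latter. A rough T-SQL sketch of a table and a parameterised insert/select (table and column names are made up for illustration, not taken from the question):

      CREATE TABLE dbo.Pictures
      (
          PictureID   int IDENTITY(1,1) PRIMARY KEY,
          FileName    nvarchar(260)  NOT NULL,
          ContentType varchar(100)   NOT NULL,   -- e.g. 'image/jpeg', echoed later in the HTTP header
          Content     varbinary(max) NOT NULL    -- the raw bytes of the file
      );

      -- Inserting: the binary content is passed as a parameter from the calling
      -- code (PHP, .NET, ...), never concatenated into the SQL text.
      INSERT INTO dbo.Pictures (FileName, ContentType, Content)
      VALUES (@FileName, @ContentType, @Content);

      -- Retrieving: select the bytes back and send them to the browser with a
      -- matching Content-Type header, e.g. header('Content-Type: image/jpeg') in PHP.
      SELECT ContentType, Content FROM dbo.Pictures WHERE PictureID = @PictureID;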

  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone, I have a stored procedure that does, among other things, some inserts into different tables inside a loop. See the example below for a clearer picture:
      INSERT INTO T1 VALUES ('something')
      SET @MyID = Scope_Identity()
      -- ... some stuff goes here
      INSERT INTO T2 VALUES (@MyID, 'something else')
      -- ... the rest of the procedure
    These two tables (T1 and T2) each have an IDENTITY(1, 1) column; let's call them ID1 and ID2. However, after running the procedure in our production database (a very busy database) and having more than 6250 records in each table, I have noticed one incident where ID1 does not match ID2, although normally for each record inserted in T1 there is a record inserted in T2, and the identity column in both is incremented consistently. The "wrong" records looked something like this:
      ID1   Col1
      ----  ---------
      4709  data-4709
      4710  data-4710

      ID2   ID1   Col1
      ----  ----  ---------
      4709  4710  data-4709
      4710  4709  data-4710
    Note the "inverted" ID1 in the second table. Not knowing that much about SQL Server's internal operations, I have come up with the following "theory"; maybe someone can correct me on it. What I think is that because the loop is faster than physically writing to the table, and/or maybe something else delayed the writing process, the records were buffered. When the time came to write them, they were written in no particular order. Is that even possible? If not, how do you explain the scenario above? If yes, then I have another question to raise: what if the first insert (from the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer to this is also yes, what can I do to ensure the insertion into the two tables happens in sequence with the correct IDENTITY? I appreciate any comments and information that help me understand this. Thanks in advance.
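
    For reference, a sketch of the usual pattern for tying the two inserts together: capture SCOPE_IDENTITY() immediately after the first insert and perform both inserts inside one transaction, so the pair either commits or rolls back as a unit (identity values themselves are never guaranteed to be gap-free or to stay in lockstep across two tables under concurrent load). The table shapes below are assumptions, not the question's real schema:

      BEGIN TRY
          BEGIN TRANSACTION;

          DECLARE @MyID int;

          INSERT INTO T1 (Col1) VALUES ('something');
          SET @MyID = SCOPE_IDENTITY();   -- identity generated by *this* scope, not by other sessions

          INSERT INTO T2 (T1_ID, Col1) VALUES (@MyID, 'something else');

          COMMIT TRANSACTION;
      END TRY
      BEGIN CATCH
          IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
          -- re-raise or log the error here as appropriate
      END CATCH;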

  • Best way to dynamically get column names from oracle tables

    - by MNC
    Hi, We are using an extractor application that will export data from the database to csv files. Based on some condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL as the data has to be extracted from more than one table. So to satisfy the UNION ALL condition we are using nulls to match the number of columns. Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever there is a change in the table projection (i.e. a new column added, an existing column modified, a column dropped) we have to manually change the code in the application. Can you please give some suggestions on how to extract the column names dynamically so that changes in the table structure do not require a change in the code? My concern is the condition that decides which table to query. The variable condition is like: if the condition is A, then load from TableX; if the condition is B, then load from TableA and TableY. We must know from which table we need to get data. Once we know the table it is straightforward to query the column names from the data dictionary. But there is one more condition, which is that some columns need to be excluded, and these columns are different for each table. I am trying to solve the problem only for dynamically generating the list of columns. But my manager told me to design the solution at the conceptual level rather than just fixing this one spot. This is a very big system with providers and consumers constantly loading and consuming data, so he wanted a solution that can be general. So what is the best way of storing the condition, table name and excluded columns? One way is storing them in the database. Are there any other ways? If yes, what is the best? I have to give at least a couple of ideas before finalizing. Thanks,
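
    On the column-list part specifically, the Oracle data dictionary can drive this. A rough sketch, where EXTRACT_CONDITION and EXTRACT_EXCLUDED_COLUMN are hypothetical configuration tables (one mapping a condition code to its source tables, one listing the columns to leave out per table):

      -- Columns of one source table, minus the per-table exclusions,
      -- in the order they appear in the table definition.
      SELECT utc.column_name
      FROM   user_tab_columns utc
      WHERE  utc.table_name = :source_table
      AND    utc.column_name NOT IN (
                 SELECT eec.column_name
                 FROM   extract_excluded_column eec
                 WHERE  eec.table_name = utc.table_name
             )
      ORDER BY utc.column_id;

      -- Hypothetical driver table: which tables feed which condition.
      -- CREATE TABLE extract_condition (
      --     condition_code  VARCHAR2(10),
      --     table_name      VARCHAR2(30),
      --     load_order      NUMBER
      -- );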

  • No buffer space available (maximum connections reached?) from Postgres EDB driver

    - by Listening.Platform
    We are facing an exception while connecting to database through our java application. The stack trace is as follows com.edb.util.PSQLException: The connection attempt failed. at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189) at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64) at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161) at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30) at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24) at com.edb.Driver.makeConnection(Driver.java:391) at com.edb.Driver.connect(Driver.java:266) at java.sql.DriverManager.getConnection(Unknown Source) at java.sql.DriverManager.getConnection(Unknown Source) ... 12 more Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(Unknown Source) at java.net.PlainSocketImpl.connectToAddress(Unknown Source) at java.net.PlainSocketImpl.connect(Unknown Source) at java.net.SocksSocketImpl.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at java.net.Socket.connect(Unknown Source) at java.net.Socket.<init>(Unknown Source) at java.net.Socket.<init>(Unknown Source) at com.edb.core.PGStream.<init>(PGStream.java:70) at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115) ... 20 more When the error occured we were not able to connect to internet and DB and had to reboot the system. But the error occured again after 3 days at same code i.e while connecting to DB. We checked TCP connections using netstat. But there were not many TCP connections i.e it has not reached the max limit. Our application has multiple long running Java processes that pools the DB connections (not more than 60) and keeps it alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application are joining large tables (10 million records) to get the related data. We are using following System and applications Windows 2003 server SP2 Java 1.6 Postgres Plus Advanced server 8.4 Database edb-jdbc14.jar driver for connection DB from Java We have used the default configuration of Postgres DB except increasing the connection to 120 from 100. Has anybody encountred the same error with postgres edb driver? Can anybody help us finding the solution?

  • SQL 2 INNER JOINS with 3 tables

    - by Jelmer Holtes
    I have a question about a SQL query. I'm building a prototype webshop in ASP.NET with Visual Studio, and I'm looking for a way to display my products. I've built a database in MS Access; it consists of multiple tables. The tables which are important for my question are: Product, Productfoto and Foto. The relations between the tables are described below. I need three pieces of data: product title, price and image. The product title and the price are in the Product table. The images are in the Foto table. Because a product can have more than one picture, there is an N-M relation between them, so I had to split it up, which I did in the Productfoto table. The connection between them is:
      product.artikelnummer -> productfoto.artikelnummer
      productfoto.foto_id -> foto.foto_id
    Then I can read the filename (in the database: foto.bestandnaam). I've created the first inner join and tested it in Access; this works:
      SELECT titel, prijs, foto_id
      FROM Product
      INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikelnummer
    But I need another INNER JOIN. How could I create that? I guess something like this (this one gives me an error):
      SELECT titel, prijs, bestandnaam
      FROM Product ((
      INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikkelnummer )
      INNER JOIN foto ON productfoto.foto_id = foto.foto_id)
    Can anyone help me?
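
    For what it's worth, Access is picky about how multiple joins are nested: each additional join has to be wrapped in parentheses around the joins that came before it, and the ON column names must match exactly. A sketch of the shape that Access normally accepts (assuming the column is spelled artikelnummer consistently):

      SELECT Product.titel, Product.prijs, Foto.bestandnaam
      FROM (Product
            INNER JOIN Productfoto ON Product.artikelnummer = Productfoto.artikelnummer)
            INNER JOIN Foto        ON Productfoto.foto_id   = Foto.foto_id;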

  • Export the datagrid data to text in asp.net+c#.net

    - by SRIRAM
    Problem: the compiler complains that there is no assembly reference/namespace for Database in the following code.
      Database db = DatabaseFactory.CreateDatabase();
      DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles");
      DataSet ds = db.ExecuteDataSet(selectCommandWrapper);
      StringBuilder str = new StringBuilder();
      for (int i = 0; i <= ds.Tables[0].Rows.Count - 1; i++)
      {
          for (int j = 0; j <= ds.Tables[0].Columns.Count - 1; j++)
          {
              str.Append(ds.Tables[0].Rows[i][j].ToString());
          }
          str.Append("<BR>");
      }
      Response.Clear();
      Response.AddHeader("content-disposition", "attachment;filename=FileName.txt");
      Response.Charset = "";
      Response.Cache.SetCacheability(HttpCacheability.NoCache);
      Response.ContentType = "application/vnd.text";
      System.IO.StringWriter stringWrite = new System.IO.StringWriter();
      System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
      Response.Write(str.ToString());
      Response.End();
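
    The Database / DatabaseFactory / DBCommandWrapper types come from the old Enterprise Library Data Access Application Block, so they need a reference to the Microsoft.Practices.EnterpriseLibrary assemblies plus the matching using directive. If that library isn't part of the project, a plain ADO.NET version can fill the same DataSet; this is only a sketch, and the connection-string name is an assumption:

      using System.Data;
      using System.Data.SqlClient;
      using System.Configuration;

      // Fill a DataSet from the stored procedure without Enterprise Library.
      string connStr = ConfigurationManager.ConnectionStrings["Default"].ConnectionString; // assumed name
      DataSet ds = new DataSet();

      using (SqlConnection conn = new SqlConnection(connStr))
      using (SqlCommand cmd = new SqlCommand("sp_GetLatestArticles", conn))
      {
          cmd.CommandType = CommandType.StoredProcedure;
          using (SqlDataAdapter da = new SqlDataAdapter(cmd))
          {
              da.Fill(ds);   // opens and closes the connection as needed
          }
      }
      // The rest of the export loop can then work on ds exactly as in the snippet above.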

  • LinQ XML mapping to a generic type

    - by Manuel Navarro
    I'm trying to use an external XML file to map the output from a stored procedure into an instance of a class. The problem is that my class is of a generic type:
      public class MyValue<T>
      {
          public T Value { get; set; }
      }
    Searching through a lot of blogs and articles I've managed to get this:
      <?xml version="1.0" encoding="utf-8" ?>
      <Database Name="" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
        <Table Name="MyValue" Member="MyNamespace.MyValue`1" >
          <Type Name="MyNamespace.MyValue`1">
            <Column Name="Category" Member="Value" DbType="VarChar(100)" />
          </Type>
        </Table>
        <Function Method="GetResourceCategories" Name="myprefix_GetResourceCategories" >
          <ElementType Name="MyNamespace.MyValue`1"/>
        </Function>
      </Database>
    The MyNamespace.MyValue`1 trick works fine, and the class is recognized. I expect four rows from the stored procedure, and I'm getting four MyValue<string> instances, but the big problem is that the property Value for all four instances is null. The property is not getting mapped and I don't really get why. Maybe worth noting that the property Value is generic, and that when the mapping is done using attributes it works perfectly. Anyone have a clue? BTW, the method GetResourceCategories:
      public ISingleResult<MyValue<string>> GetResourceCategories()
      {
          IExecuteResult result = this.ExecuteMethodCall(
              this, (MethodInfo)MethodInfo.GetCurrentMethod());
          return (ISingleResult<MyValue<string>>)result.ReturnValue;
      }

  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use. I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;. The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time -- and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of: Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model. Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.) How would you deal with a complex reporting problem like this?

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures. However, because this design cannot track changes made through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger (the assumption being that all DML run through stored procedures will have @@NESTLEVEL > 1). I.e. the body of the trigger code looks something like:
      IF @@NESTLEVEL = 1 -- implies the call is direct SQL, so generate history from here
      BEGIN
          ... insert into audit table
      END
    This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?
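
    One commonly suggested alternative to the @@NESTLEVEL check is to have the stored procedures mark the session explicitly, for example with CONTEXT_INFO, and have the trigger audit only when that marker is absent. A sketch of the idea only (the marker value is arbitrary, and the audited table and columns are made up):

      -- Inside each stored procedure, before its DML:
      SET CONTEXT_INFO 0x01;   -- flag: auditing handled by the proc's OUTPUT clause
      -- ... the procedure's INSERT/UPDATE/DELETE statements ...
      SET CONTEXT_INFO 0x00;   -- clear the flag again on the way out

      -- Inside the audit trigger:
      IF SUBSTRING(ISNULL(CONTEXT_INFO(), 0x00), 1, 1) <> 0x01
      BEGIN
          -- DML did not come through one of the self-auditing procedures,
          -- so generate the audit rows here.
          INSERT INTO dbo.Customer_Audit (CustomerID, Name, AuditDate)
          SELECT CustomerID, Name, SYSDATETIME()
          FROM inserted;
      END

    The flag is per session, so the procedures need to reset it reliably (including in their error handling), otherwise the trigger will keep skipping direct DML issued later on a pooled connection.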

  • JPA 2.0 Provider Hibernate

    - by Rooh
    I have a very strange problem. We are using JPA 2.0 with Hibernate, annotation based, with the database schema generated through JPA (DDL generation is enabled) and MySQL as the database. I will provide some reference classes and then my problem.
      @MappedSuperclass
      public abstract class Common implements Serializable {
          @Id
          @GeneratedValue(strategy = GenerationType.AUTO)
          @Column(name = "id", updatable = false)
          private Long id;

          @ManyToOne
          @JoinColumn
          private Address address;

          // with all getters and setters
          // as well as equals and hashCode
      }

      @Entity
      public class Parent extends Common {
          private String name;

          @OneToMany(cascade = {CascadeType.MERGE, CascadeType.PERSIST}, mappedBy = "parent")
          private List<Child> child;

          // setters and rest of class
      }

      @Entity
      public class Child extends Common {
          // some properties with getters/setters
      }

      @Entity
      public class Address implements Serializable {
          @Id
          @GeneratedValue(strategy = GenerationType.AUTO)
          @Column(name = "id", updatable = false)
          private Long id;

          private String street;

          // rest of class with getters/setters
      }
    As you can see in the code, the Parent and Child classes both extend the Common class, so both have an address property and an id. The problem occurs when I change the address reference in the Parent class: it reflects the same change in all Child objects in the list. And if I change the address reference in a Child, then on merge it changes the address reference of the Parent as well. I am not able to figure out whether this is a problem with JPA or with Hibernate.

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (SQlite DB) via Javascript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html On first load, the app creates the database, tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, then my app database is left in an inconsistent state. What I prefer to do is deploy the SQLite DB as part of my iTunes App packaging so nothing must be populated at app cold start. However, I'm not sure if that is possible -- all of the google hits for this topic that I can find are referring to the core-data provided SQLite which is not what we're using... If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!
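
    On the "wrap it in a transaction" idea: the JavaScript database API used by PhoneGap/Safari runs each transaction atomically, so the schema creation and the seed INSERTs can be grouped so that a half-finished first run is rolled back rather than left inconsistent. A rough sketch (database name, tables and seed rows are made up):

      var db = openDatabase("myapp", "1.0", "My app data", 5 * 1024 * 1024);

      db.transaction(function (tx) {
          // Everything in this callback commits or rolls back as one unit.
          tx.executeSql("CREATE TABLE IF NOT EXISTS settings (k TEXT PRIMARY KEY, v TEXT)");
          tx.executeSql("CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT)");
          tx.executeSql("INSERT INTO products (id, name) VALUES (?, ?)", [1, "Widget"]);
          // ... the rest of the seed data ...
          tx.executeSql("INSERT OR REPLACE INTO settings (k, v) VALUES ('is_app_db_loaded', 'yes')");
      }, function (err) {
          // Transaction failed and was rolled back; it is safe to retry on the next launch.
          console.log("Seeding failed: " + err.message);
      }, function () {
          console.log("Seeding finished");
      });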

  • Multi threading in WCF RIA Services

    - by synergetic
    I use WCF RIA Services to update customer database. In domain service: public void UpdateCustomer(Customer customer) { this.ObjectContext.Customers.AttachAsModified(customer); syncCustomer(customer); } After update, a database trigger launches and depending on the columns updated it may insert a new record in CustomerChange table. syncCustomer(customer) method is executed to check for a new record in the CustomerChange table and if found it will create a text file which contains customer information and forwards that file to external system for import. Now this synchronization may take a time so I wanted to execute it in different thread. So: private void syncCustomer(Customer customer) { this.ObjectContext.SaveChanges(); new Thread(() => syncCustomerInfo(customer.CustomerID)) { IsBackground = true }.Start(); } private void syncCustomerInfo(int customerID) { //Thread.Sleep(2000); //does real job here ... ... } The problem is in most cases syncCustomerInfo method cannot find any new CustomerChange record even if it was definitely there. If I force thread sleep then it finds a new record. I also looked Entity Framework events but the only event provided by object context is SavingChanges which occur before changes are saved. Please suggest me what else to try.

  • Update SQL Server 2000 to SQL Server 2008: Benefits please?

    - by Ciaran Archer
    Hi there, I'm looking for the benefits of upgrading from SQL Server 2000 to 2008. I was wondering: What database features can we leverage with 2008 that we can't now? What new TSQL features can we look forward to using? What performance benefits can we expect to see? What else will make management go for it? And the converse: What problems can we expect to encounter? What other problems have people found when migrating? Why fix something that isn't (technically) broken? We work in a Java shop, so any .NET / CLR stuff won't rock our world. We also use Eclipse as our main development environment, so any integration with Visual Studio won't be a plus. We do use SQL Server Management Studio however. Some background: Our main database machine is a 32bit Dell Intel Xeon MP CPU 2.0GHz, 40MB of RAM with Physical Address Extension running Windows Server 2003 Enterprise Edition. We will not be changing our hardware. Our databases in total are under a TB with some having more than 200 tables. But they are busy and during busy times we see 60-80% CPU utilisation. Apart from the fact that SQL Server 2000 is coming close to end of life, why should we upgrade? Any and all contributions are appreciated!

  • Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)

    - by Dennis Williamson
    In a Python program I'm writing I've compared using a for loop and increment variables versus list comprehension with map(itemgetter) and len() when counting entries in dictionaries which are in a list. It takes the same time using a each method. Am I doing something wrong or is there a better approach? Here is a greatly simplified and shortened data structure: list = [ {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'biscuits and gravy'}, {'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'peaches and cream'}, {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'}, {'key1': False, 'dontcare': False, 'ignoreme': True, 'key2': False, 'filenotfound': 'over and under'}, {'key1': True, 'dontcare': True, 'ignoreme': False, 'key2': True, 'filenotfound': 'Scotch and... well... neat, thanks'} ] Here is the for loop version: #!/usr/bin/env python # Python 2.6 # count the entries where key1 is True # keep a separate count for the subset that also have key2 True key1 = key2 = 0 for dictionary in list: if dictionary["key1"]: key1 += 1 if dictionary["key2"]: key2 += 1 print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2) Output for the data above: Counts: key1: 3, subset key2: 2 Here is the other, perhaps more Pythonic, version: #!/usr/bin/env python # Python 2.6 # count the entries where key1 is True # keep a separate count for the subset that also have key2 True from operator import itemgetter KEY1 = 0 KEY2 = 1 getentries = itemgetter("key1", "key2") entries = map(getentries, list) key1 = len([x for x in entries if x[KEY1]]) key2 = len([x for x in entries if x[KEY1] and x[KEY2]]) print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2) Output for the data above (same as before): Counts: key1: 3, subset key2: 2 I'm a tiny bit surprised these take the same amount of time. I wonder if there's something faster. I'm sure I'm overlooking something simple. One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist and I'd have to profile the overhead of the data transfer, etc., and a database may not always be available. I have no control over the original form of the data. The code above is not going for style points.
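
    If the goal is simply faster counting, one variant worth measuring is a single pass with generator expressions, which avoids building the intermediate entries list entirely. A small sketch against the same kind of structure (the list is renamed to data here to avoid shadowing the built-in list):

      # Single pass, no intermediate list: sum() counts the matching entries directly.
      data = [
          {'key1': True,  'key2': True},
          {'key1': False, 'key2': True},
          {'key1': True,  'key2': False},
      ]

      key1 = sum(1 for d in data if d["key1"])
      key2 = sum(1 for d in data if d["key1"] and d["key2"])

      print "Counts: key1: %d, subset key2: %d" % (key1, key2)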

  • Do you have health checks in your web app or web site?

    - by Pekka
    I have built PHP based "health check" scripts for several projects, but they were always custom-made for the occasion and not written for abstraction as an independent product. I would like to know whether such a solution exists. What I mean by "health check" is a protected web page that functions much like a suite of unit tests, but on a more operational level, showing red/yellow/green statuses for things like:
      - Are the cache directories writable?
      - Is the PHP version correct, are required extensions installed?
      - Is the configuration file protected from writing?
      - Is the database server reachable?
      - Do the key tables exist in the database?
      - Is there enough disk space available?
      - Is the site's front page reachable and does it render fully ( = no PHP errors)?
      - Do the project's libraries' MD5 checksums match the original ones?
    Do you do this - or parts of it - in your applications and web sites? Are there any standardized tools for this that bring along all the functionality to perform the tests (ideally as plugins), and just need to be configured accordingly? Is there a way to set this up using one of the Unit Testing frameworks available for PHP (preferably PHPUnit)? If so, do you know any resources / tutorials outlining how?
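
    As an illustration of the kind of page described above, here is a rough PHP sketch of a minimal check runner; the specific checks, paths, credentials and thresholds are placeholders, not recommendations:

      <?php
      // Minimal health-check runner: each check returns true (green) or false (red).
      $checks = array(
          'cache dir writable'   => is_writable('/var/www/app/cache'),          // placeholder path
          'php version >= 5.2'   => version_compare(PHP_VERSION, '5.2.0', '>='),
          'pdo_mysql loaded'     => extension_loaded('pdo_mysql'),
          'config not writable'  => !is_writable('/var/www/app/config.php'),    // placeholder path
          'disk space > 1 GB'    => disk_free_space('/') > 1024 * 1024 * 1024,
      );

      // Database reachability as its own guarded check.
      try {
          new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');           // placeholder credentials
          $checks['database reachable'] = true;
      } catch (PDOException $e) {
          $checks['database reachable'] = false;
      }

      header('Content-Type: text/plain');
      foreach ($checks as $name => $ok) {
          echo ($ok ? 'OK   ' : 'FAIL ') . $name . "\n";
      }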

  • SQL Query to return maximums over decades

    - by Abraham Lincoln
    My question is the following. I have a baseball database, and in that baseball database there is a master table which lists every player that has ever played. There is also a batting table, which tracks every players' batting statistics. I created a view to join those two together; hence the masterplusbatting table. CREATE TABLE `Master` ( `lahmanID` int(9) NOT NULL auto_increment, `playerID` varchar(10) NOT NULL default '', `nameFirst` varchar(50) default NULL, `nameLast` varchar(50) NOT NULL default '', PRIMARY KEY (`lahmanID`), KEY `playerID` (`playerID`), ) ENGINE=MyISAM AUTO_INCREMENT=18968 DEFAULT CHARSET=latin1; CREATE TABLE `Batting` ( `playerID` varchar(9) NOT NULL default '', `yearID` smallint(4) unsigned NOT NULL default '0', `teamID` char(3) NOT NULL default '', `lgID` char(2) NOT NULL default '', `HR` smallint(3) unsigned default NULL, PRIMARY KEY (`playerID`,`yearID`,`stint`), KEY `playerID` (`playerID`), KEY `team` (`teamID`,`yearID`,`lgID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; Anyway, my first query involved finding the most home runs hit every year since baseball began, including ties. The query to do that is the following.... select f.yearID, f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS This worked great. However, I now want to find the highest HR hitter in each decade since baseball began. Here is what I tried. select f.yearID, truncate(f.yearid/10,0) as decade,f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS group by decade You can see that I truncated the yearID in order to get 187, 188, 189 etc instead of 1897, 1885,. I then grouped by the decade, thinking that it would give me the highest per decade, but it is not returning the correct values. For example, it's giving me Adrian Beltre with 48 HR's in 2004 but everyone knows that Barry Bonds hit 73 HR in 2001. Can anyone give me some pointers?
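
    For comparison, the same derived-table pattern as the per-year query can be applied per decade by grouping the inner query on the truncated year and joining back on the same expression; a sketch against the masterplusbatting view (ties within a decade come back as multiple rows, as in the per-year version):

      SELECT TRUNCATE(f.yearID / 10, 0) AS decade,
             f.yearID, f.nameFirst, f.nameLast, f.HR
      FROM masterplusbatting AS f
      INNER JOIN (
          SELECT TRUNCATE(yearID / 10, 0) AS decade,
                 MAX(HR) AS homers
          FROM masterplusbatting
          GROUP BY TRUNCATE(yearID / 10, 0)
      ) AS x
          ON TRUNCATE(f.yearID / 10, 0) = x.decade
         AND f.HR = x.homers
      ORDER BY decade, f.yearID;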

  • Moving front page (cursebird or foursquare)

    - by Dan Samuels
    Ok so I'll be honest: I have a good amount of experience with PHP/MySQL, but I've just started learning jQuery, and I've done very little (though some) with Ajax. So using the terms Ajax/jQuery interchangeably is a bit confusing to me. Anyway, as the title suggests, I have a website with 5 items, and I want them to move (meaning, if a more recent one is entered, remove the last item and put the new one on top). They are the 5 most recent items in the database table. I've coded jQuery as a test so it fades out the last one, the whole thing moves down, makes room at the top, and fades in a new one. However, it's a test and has zero interaction with the database; the one that fades in is just in a hidden div. So the jQuery part is taken care of. I'm unsure how to go about the rest. I was thinking of maybe having Ajax check a separate URL that has those 5 items in raw format, and if they change, then refresh? Not looking for a "plz code 4 me" answer, just the concept of how it would work, or some links to get off to the right start. edit - Also, the 5 items are ranked, so if I click item 3 I need it to move above item 2 refreshlessly, which causes a whole other issue I assume.

  • From Sinatra Base object. Get port of application including the base object

    - by Poul
    I have a Sinatra::Base object that I would like to include in all of my web apps. In that base class I have the configure method which is called on start-up. I would like that configure code to 'register' that service with a centralized database. The information that needs to be sent when registering is the information on how to contact this web-service... things like host and port. I then plan on having a monitoring service that will spin over all registered services and occasionally ping them to make sure they are still up and running. In the configure method I am having trouble getting the port information. The 'self.settings.port' variable doesn't seem to work in this method. a) any ideas on how to get the port? I have the host. b) is there a sinatra plug-in that already does something like this so I don't have to write it myself? :-) //in my Sinatra::Base code. lets call it register_me.rb RegisterMe < Sinatra::Base configure do //save host and port information to database end get '/check_status' //return status end //in my web service code require register_me //at this point, sinatra will initialize the RegisterMe object and call configure post ('/blah') //example of a method for this particular web service end

  • C# web service, MySql encoding problem

    - by Boban
    I use a C# web service to insert, delete and get data from a MySQL database. The problem is that some of the data is in Macedonian (Cyrillic). When I insert directly into the database, it inserts OK. For example: "???" is "???". When I insert through the service, it's not. For example: "???" is "???". When I try to get data through the service, it gets it OK. What's the problem with the inserting? Here is part of my code for inserting:
      MySqlConnection connection = new MySqlConnection(MyConString);
      MySqlCommand command = connection.CreateCommand();
      command.CommandText = "INSERT INTO user (id_user, name) VALUES (NULL, ?name);";
      command.Parameters.Add("?name", MySqlDbType.VarChar).Value = name;
      connection.Open();
      command.ExecuteReader();
      connection.Close();
      return thisrow;
    Thank you in advance!
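
    One thing routinely checked in this situation is the character set of the connection itself: if the web service's connection to MySQL negotiates a non-UTF-8 charset, Cyrillic text can be mangled on the way in even though the table itself is fine. A sketch of specifying it in the Connector/NET connection string (server name, credentials and database are placeholders; name is the value being inserted):

      using MySql.Data.MySqlClient;

      // Ask Connector/NET to talk to the server in UTF-8 so Cyrillic survives the round trip.
      // The table/columns should also use a collation that can hold the text (utf8 or cp1251).
      string MyConString =
          "Server=localhost;Database=mydb;Uid=someuser;Pwd=somepassword;CharSet=utf8;";

      using (MySqlConnection connection = new MySqlConnection(MyConString))
      using (MySqlCommand command = connection.CreateCommand())
      {
          command.CommandText = "INSERT INTO user (id_user, name) VALUES (NULL, ?name);";
          command.Parameters.Add("?name", MySqlDbType.VarChar).Value = name;
          connection.Open();
          command.ExecuteNonQuery();   // ExecuteNonQuery rather than ExecuteReader for an INSERT
      }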

  • entity framework vNext wish list

    - by Fred Yang
    I have been intensively studying and using EF4 in my project, and I do feel the improvement it has over version 1. But I have found some things I cannot get around easily. Here is a list of what I want to be better in EF vNext:
      1. The model designer should allow multiple views of the same model, so that I don't need to cram all my entities into a single view.
      2. Respect the user's manual edits of the edmx. Currently, some database view objects simply cannot be imported into the model because the designer "smartly" thinks the view does not have a primary key, so I have to manually edit the edmx to correct the designer's behaviour. But on the next "update from database" task, the designer reverts my customization. For now, I either fall back to editing the edmx file entirely by hand, or I have to use a compare tool to keep the new update, roll back, and merge the new update into my old edmx file manually. The designer should be improved to allow both default behaviour and manual control; I want control over whether the designer refreshes the changes to an imported object.
      3. Support user-defined table functions. LINQ is about composability; stored procs do not support composability. I wish I could use user-defined table functions, which do.
    What are your wishes for EF vNext?

  • Using javascript and php together

    - by EmmyS
    I have a PHP form that needs some very simple validation on submit. I'd rather do the validation client-side, as there's quite a bit of server-side validation that happens to deal with writing form values to a database. So I just want to call a javascript function onsubmit to compare values in two password fields. This is what I've got: function validate(form){ var password = form.password.value; var password2 = form.password2.value; alert("password:"+password+" password2:" + password2); if (password != password2) { alert("not equal"); document.getElementByID("passwordError").style.display="inline"; return false; } alert("equal"); return true; } The idea being that a default-hidden div containing an error message would be displayed if the two passwords don't match. The alerts are just to display the values of password and password2, and then again to indicate whether they match or not (will not be used in production code). I'm using an input type=submit button, and calling the function in the form tag: <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post" onsubmit="return validate(this);"> Everything is alerting as expected when entering non-matching values. I would have hoped (and assumed, based on past use) that if the function returned false, the actual submit would not occur. And yet, it is. I'm testing by entering non-matching values in the password fields, and the alerts clearly show me the values and the not equal result, but the actual form action is still occurring and it's trying to write to my database. I'm pretty new at PHP; is there something about it that will not let me combine with javascript this way? Would it be better to use an input type=button and include submit() in the function itself if it returns true?
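
    As a point of reference, a minimal sketch of the intended client-side check in isolation, assuming inputs named password and password2 and a hidden element with id passwordError (everything else stripped out); PHP and JavaScript combine fine this way, since the onsubmit return value decides whether the POST to the PHP script happens at all:

      function validate(form) {
          var password  = form.password.value;
          var password2 = form.password2.value;

          if (password !== password2) {
              // Note the casing: getElementById (lowercase d).
              document.getElementById("passwordError").style.display = "inline";
              return false;   // returned to onsubmit, which cancels the submit
          }
          return true;        // allow the form to post to the PHP handler
      }

    One general note when debugging this pattern: if an exception is thrown inside the handler, the function never returns false and the browser submits the form anyway.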

  • What is the preferred method of building an extremely lightweight business object / DAL now that I have

    - by Seth Spearman
    Hello, I have completed a simple database for a project. Only 6 tables. Of the 6, one is a "lookup" table. There is one "master" table that is the driver for the system; it is referenced as a foreign key by the other four tables. Given that this step is completed, what is the FASTEST, EASIEST way to create POCOs/BizObjects that can load the data and the child data? Here are my caveats:
      * I don't want to spend more than 30-60 minutes learning how.
      * There is very little biz logic needed in the POCOs. They will pretty much just load data; they don't even really need to write data back.
      * I already know CSLA (up to version 3) but I feel that is overkill for this little project.
      * Nevertheless, I would love it if ROOT objects could have collection classes that contain the CHILD objects as in CSLA... but again, without using CSLA.
      * Please give the answer for .NET 3.5, but also for the case where I was restricted to only using .NET 2.0.
      * Ideally I could just point a tool at the database and the POCOs would be generated.
      * Free.
    Just curious what you guys use for this kind of scenario. I understand that this question is subjective but I want to hear a variety of answers. Seth

  • Classic ASP application-wide initializations and object caching

    - by slack3r
    In classic ASP (which I am forced to use), I have a few factory functions, that is, functions that return classes. I use JScript. In one include file I use these factory functions to create some classes that are used throughout the application. This include file is included with the #include directive in all pages. These factory functions do some "heavy lifting" and I don't want them to be executed on every page load. So, to make this clear I have something like this: // factory.inc function make_class(arg1, arg2) { function klass() { //... } // ... Some heavy stuff return klass; } // init.inc, included everywhere <!-- #include FILE="factory.inc" --> // ... MyClass1 = make_class(myarg01, myarg02); MyClass2 = make_class(myarg11, myarg12); //... How can I achieve the same effect without calling make_class on every page load? I know that I can't cache the classes in the Application object I can't use the Application_OnStart hook in Global.asa I could probably create a scripting component, but I really don't want to do that So, is there something else I can do? Maybe some way to achieve caching of these classes, which are really objects in JScript. PS: [further clarification] In the above code "heavy stuff" is not so heavy, but I just want to know if there's a way to avoid it being executed all the time. It reads database meta information, builds a table of the primary keys in the database and another table that resolves strings to classes, etc.

  • Applying fine-grained security to an existing application

    - by Mark
    I've inherited a reasonably large and complex ASP.NET MVC3 web application using EF Code First on SQL Server. It uses ASP.NET Membership roles with database authentication. The controller actions are secured with attributes derived from AuthorizeAttribute that map roles to actions. There are extension methods for the finer points, such as showing a particular widget to particular roles. This is works great and I have a good understanding of the current security model. I've been asked to provide finer grained security at the data level. For example a 'Customer' user can only see data (throughout the database) associated with themselves and not other Customers. The problem is that 'Customer' is only 1 of 5 different types with their own specific restrictions (each of the 9 roles is one of these 5 types). The best thing I can think of is to go through all the data repositories and extend each and every LINQ statements/query with a filter for every user type. Even if I had time for that it doesn't seem like the most elegant way. Any suggestions? I really don't know where to start with this so anything could be helpful. Many thanks.
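
    One pattern sometimes used for this kind of row-level restriction is to funnel every query through a small set of filter helpers rather than editing each LINQ statement by hand, for example a marker interface plus an IQueryable extension. This is a sketch only, with made-up type names that are not from the application in question:

      using System.Linq;

      // Entities whose rows belong to a single customer implement this.
      public interface IBelongsToCustomer
      {
          int CustomerId { get; }
      }

      public static class SecurityFilters
      {
          // One central place for the "customers only see their own rows" rule.
          public static IQueryable<T> ForCurrentUser<T>(this IQueryable<T> source, int? customerIdOrNull)
              where T : class, IBelongsToCustomer
          {
              // Null means the current user is not a Customer-type user: no row-level restriction here.
              if (customerIdOrNull == null)
                  return source;

              int customerId = customerIdOrNull.Value;
              return source.Where(e => e.CustomerId == customerId);
          }
      }

      // Usage inside a repository method (currentCustomerId comes from the membership layer):
      // var orders = db.Orders.ForCurrentUser(currentCustomerId).ToList();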
