Search Results

Search found 3691 results on 148 pages for 'perfect forwarding'.

Page 123/148 | < Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >

  • SQL Server Express performance issue

    - by Developer IT
    Hi folks! I know my questions will sound silly and probably nobody will have a perfect answer, but since I am at a complete dead end with the situation it will make me feel better to post it here. So... I have a SQL Server Express database that's 500 MB. It contains 5 tables and maybe 30 stored procedures. This database is used to store articles and is used for the Developer IT web site. Normally the web pages load quickly, let's say 2 or 3 seconds. BUT, the sqlserver process uses 100% of the processor for those 2 or 3 seconds. I tried to find which stored procedure was the problem and I could not find one. It seems to be every read of the table that contains the articles (there are about 155,000 of them and 20 or so get added every 15 minutes). I added a few indexes but without luck... Is it because the table is full-text indexed? Should I order by the primary key instead of the date? I never had any problems with ordering by dates before... Should I use dynamic SQL? Should I add the primary key into the URL of the articles? Should I use multiple indexes for separate columns or one big index? If you want more details or code bits, just ask for them. Basically, every little hint is much appreciated. Thanks.
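
    A possible first step, sketched under assumptions: if the articles live in a table named Articles ordered by a DatePublished column (both names are hypothetical here), a covering index on the ordering column lets the listing query seek instead of scanning the full-text-indexed table.

        -- Hypothetical table/column names; adjust to the real schema.
        CREATE NONCLUSTERED INDEX IX_Articles_DatePublished
            ON dbo.Articles (DatePublished DESC)
            INCLUDE (Title, Summary);       -- add the columns the listing page actually reads

        -- Then measure which statement still scans the table:
        SET STATISTICS IO ON;
        EXEC dbo.GetLatestArticles;         -- hypothetical stored procedure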

    Read the article

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation: I have a set of utility functions, say 5 to 10 files. Technically they are a static library, cross-platform -- SConscript/SConstruct plus a Visual Studio project (not a solution). Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link to one central place. Some projects use one file, some use two, some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler). The problem arises when, while working on one project, you modify those utility function files. Because each project has a copy of the files, this introduces a new version, which leads to a mess when you later (a week later, for example) try to guess which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork). How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic links/hard links, but it isn't perfect -- if you delete the one central storage, it will all go to hell. And hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do it or if it is possible to do this with git. So, what do you think?
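
    One common way to get that "one local copy, synchronized with a central repository" behaviour is a git submodule; a minimal sketch with placeholder repository URL and paths:

        # In each consuming project (URL and paths are placeholders):
        git submodule add git@example.com:utils-lib.git external/utils
        git commit -m "Track the shared utility library as a submodule"

        # After cloning a project, fetch the pinned utility revision:
        git submodule update --init

        # To pick up newer utility code, update inside the submodule and commit the new pointer:
        cd external/utils && git pull origin master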

    Read the article

  • how to place dropdown list box in jquery grid column

    - by kumar
    Hello friends, I have the jqGrid columns defined like this: using System; using System.Collections.Generic; using System.Linq; using System.Web; using Trirand.Web.Mvc; using System.Web.UI.WebControls; namespace JQGridMVCExamples.Models { public class OrdersJqGridModel { public OrdersJqGridModel() { OrdersGrid = new JQGrid { Columns = new List<JQGridColumn>() { new JQGridColumn { DataField = "OrderID", Width = 50 }, new JQGridColumn { DataField = "OrderDate", Width = 100, DataFormatString = "{0:d}" }, new JQGridColumn { DataField = "CustomerID", Width = 100 }, new JQGridColumn { DataField = "Freight", Width = 75 }, new JQGridColumn { DataField = "ShipName" } }, Width = Unit.Pixel(640) }; OrdersGrid.ToolBarSettings.ShowRefreshButton = true; } public JQGrid OrdersGrid { get; set; } } } and in the view I am calling it like this: <div> <%= Html.Trirand().JQGrid(Model.OrdersGrid, "JQGrid1") %> </div> The result is perfect, but for the Freight column in the jQuery grid I need to place a dropdown list dynamically for all result rows. Can anyone help me out? Thanks

    Read the article

  • Execute Stored Procedure from Classic ASP

    - by Jaco Pretorius
    For some fantastic reason I find myself debugging a problem in a Classic ASP page (at least 10 years of my life lost in the last 2 days). I'm trying to execute a stored procedure which contains some OUT parameters. The problem is that one of the OUT parameters is not being populated when the stored procedure returns. I can execute the stored proc from SQL management studio (this is 2008) and all the values are being set and returned exactly as expected. declare @inVar1 varchar(255) declare @inVar2 varchar(255) declare @outVar1 varchar(255) declare @outVar2 varchar(255) SET @inVar2 = 'someValue' exec theStoredProc @inVar1 , @inVar2 , @outVar1 OUT, @outVar2 OUT print '@outVar1=' + @outVar1 print '@outVar2=' + @outVar2 Works great. Fantastic. Perfect. The exact values that I'm expecting are being returned and printed out. Right, since I'm trying to debug a Classic ASP page I copied the code into a VBScript file to try and narrow down the problem. Here is what I came up with: Set Conn = CreateObject("ADODB.Connection") Conn.Open "xxx" Set objCommandSec = CreateObject("ADODB.Command") objCommandSec.ActiveConnection = Conn objCommandSec.CommandType = 4 objCommandSec.CommandText = "theStoredProc " objCommandSec.Parameters.Refresh objCommandSec.Parameters(2) = "someValue" objCommandSec.Execute MsgBox(objCommandSec.Parameters(3)) Doesn't work. Not even a little bit. (Another ten years of my life down the drain) The third parameter is simply NULL - which is what I'm experiencing in the Classic ASP page as well. Could someone shed some light on this? Am I completely daft for thinking that the classic ASP code would be the same as the VBScript code? I think it's using the same scripting engine and syntax so I should be ok, but I'm not 100% sure. The result I'm seeing from my VBScript is the same as I'm seeing in ASP.
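
    A minimal sketch of the explicit-parameter approach, which usually behaves better from script than Parameters.Refresh, assuming the same varchar(255) parameters as in the T-SQL test above (the ADO constants must be declared by hand in plain VBScript):

        Const adVarChar = 200, adParamInput = 1, adParamOutput = 2, adCmdStoredProc = 4

        Set cmd = CreateObject("ADODB.Command")
        cmd.ActiveConnection = Conn
        cmd.CommandType = adCmdStoredProc
        cmd.CommandText = "theStoredProc"
        cmd.Parameters.Append cmd.CreateParameter("@inVar1",  adVarChar, adParamInput,  255, Null)
        cmd.Parameters.Append cmd.CreateParameter("@inVar2",  adVarChar, adParamInput,  255, "someValue")
        cmd.Parameters.Append cmd.CreateParameter("@outVar1", adVarChar, adParamOutput, 255)
        cmd.Parameters.Append cmd.CreateParameter("@outVar2", adVarChar, adParamOutput, 255)
        cmd.Execute
        ' If the procedure also returns a recordset, close or fully read it before
        ' reading the output parameters; ADO only populates them at that point.
        MsgBox cmd.Parameters("@outVar1").Value & " / " & cmd.Parameters("@outVar2").Value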

    Read the article

  • PHP Missing Function In Older Version

    - by Umair Ashraf
    This PHP function of mine converts a datetime string into a more readable representation of the elapsed date and time. It works perfectly in PHP 5.3.0, but the server runs PHP 5.2.17, which lacks the method it relies on. Is there a way I can fix this efficiently? This is not the only function that needs "diff"; there are many more. public function ago($dt1) { $interval = date_create('now')->diff(date_create($dt1)); $suffix = ($interval->invert ? ' ago' : '-'); if ($v = $interval->y >= 1) return $this->pluralize($interval->y, 'year') . $suffix; if ($v = $interval->m >= 1) return $this->pluralize($interval->m, 'month') . $suffix; if ($v = $interval->d >= 1) return $this->pluralize($interval->d, 'day') . $suffix; if ($v = $interval->h >= 1) return $this->pluralize($interval->h, 'hour') . $suffix; if ($v = $interval->i >= 1) return $this->pluralize($interval->i, 'minute') . $suffix; return $this->pluralize($interval->s, 'second') . $suffix; }
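
    A rough sketch of a 5.2-compatible fallback that avoids DateTime::diff entirely, assuming the same pluralize() helper exists; month and year lengths are approximated in seconds:

        public function ago($dt1)
        {
            $delta  = time() - strtotime($dt1);
            $suffix = ($delta >= 0) ? ' ago' : '-';
            $delta  = abs($delta);

            // Approximate unit lengths in seconds, largest first.
            $units = array(31536000 => 'year', 2592000 => 'month', 86400 => 'day',
                           3600 => 'hour', 60 => 'minute', 1 => 'second');
            foreach ($units as $secs => $name) {
                if ($delta >= $secs) {
                    return $this->pluralize((int) floor($delta / $secs), $name) . $suffix;
                }
            }
            return $this->pluralize(0, 'second') . $suffix;
        }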

    Read the article

  • Is there something like a Filestorage class to store files in?

    - by nebukadnezzar
    Is there something like a class that might be used to store files and directories in, just like the way Zip files might be used? Since I haven't found any "real" class to write Zip files (real class as in real class), it would be nice to be able to store files and directories in a container-like file. A perfect API would probably look like this: int main() { ContainerFile cntf("myContainer.cnt", ContainerFile::CREATE); cntf.addFile("data/some-interesting-stuff.txt"); cntf.addDirectory("data/foo/"); cntf.addDirectory("data/bar/", ContainerFile::RECURSIVE); cntf.close(); } ... I hope you get the idea. Important requirements are: the library must be cross-platform, and anything *GPL is not acceptable in this case (the MIT and BSD licenses are). I already played with the thought of creating an implementation based on SQLite (and its ability to store binary blobs). Unfortunately, it seems impossible to store directory structures in a SQLite database, which makes it pretty much useless in this case. Is it useless to hope for such a class library?
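
    For what it's worth, directory structures can be kept in SQLite by storing each entry's relative path as a row, with directories simply having no blob; a minimal sketch of that idea, assuming an entries(path, is_dir, data) table created elsewhere (SQLite itself is public domain, so the license constraint holds):

        #include <sqlite3.h>
        #include <string>

        // Insert one entry; pass data == NULL (size 0) for a directory.
        void addEntry(sqlite3 *db, const std::string &path, const void *data, int size)
        {
            sqlite3_stmt *stmt = NULL;
            sqlite3_prepare_v2(db,
                "INSERT INTO entries(path, is_dir, data) VALUES(?1, ?2, ?3)",
                -1, &stmt, NULL);
            sqlite3_bind_text(stmt, 1, path.c_str(), -1, SQLITE_TRANSIENT);
            sqlite3_bind_int (stmt, 2, data == NULL ? 1 : 0);
            sqlite3_bind_blob(stmt, 3, data, size, SQLITE_TRANSIENT);  // a NULL pointer binds SQL NULL
            sqlite3_step(stmt);
            sqlite3_finalize(stmt);
        }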

    Read the article

  • Modify MySQL INSERT statement to omit the insertion of certain rows

    - by dave
    I'm trying to expand a little on a statement that I received help with last week. As you can see, I'm setting up a temporary table and inserting rows of student data from a recently administered test for a few dozen schools. When the rows are inserted, they are sorted by the score (totpct_stu, high to low) and the row_number is added, with 1 representing the highest score, etc. I've learned that there were some problems at school #9999 in SMITH's class (every student made a perfect score and they were the only students in the district to do so). So, I do not want to import SMITH's class. As you can see, I DELETED SMITH's class, but this messed up the row numbering for the remainder of the students at the school (e.g., the high score row_number is now 20, not 1). How can I modify the INSERT statement so as to not insert this class? Thanks! DROP TEMPORARY TABLE IF EXISTS avgpct ; CREATE TEMPORARY TABLE avgpct_1 ( sch_code VARCHAR(3), schabbrev VARCHAR(75), teachername VARCHAR(75), totpct_stu DECIMAL(5,1), row_number SMALLINT, dummy VARCHAR(75) ); -- ---------------------------------------- INSERT INTO avgpct SELECT sch_code , schabbrev , teachername , totpct_stu , @num := IF( @GROUP = schabbrev, @num + 1, 1 ) AS row_number , @GROUP := schabbrev AS dummy FROM sci_rpt WHERE grade = '05' AND totpct_stu >= 1 -- has a valid score ORDER BY sch_code, totpct_stu DESC ; -- --------------------------------------- -- select * from avgpct ; -- --------------------------------------- DELETE FROM avgpct_1 WHERE sch_code = '9999' AND teachername = 'SMITH' ;
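
    One way to keep the row numbering intact is to exclude the class in the SELECT itself instead of deleting afterwards; a sketch based on the statement above:

        INSERT INTO avgpct
        SELECT sch_code, schabbrev, teachername, totpct_stu,
               @num   := IF(@GROUP = schabbrev, @num + 1, 1) AS row_number,
               @GROUP := schabbrev AS dummy
        FROM   sci_rpt
        WHERE  grade = '05'
          AND  totpct_stu >= 1                                    -- has a valid score
          AND  NOT (sch_code = '9999' AND teachername = 'SMITH')  -- skip the suspect class
        ORDER  BY sch_code, totpct_stu DESC;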

    Read the article

  • Minimal deployment of couchdb on windows

    - by MartinStettner
    Hi, I'd like to use couchdb for a client-only application on Windows (the document-oriented structure and the synchronization features would be perfect for me). There is a Windows installer package here, but the installer itself is about 45 MB, and when installed it takes more than 100 MB on my HD. This is far too much for my (relatively small) application. I noticed that there are a lot of "src" directories in the couchdb/lib subdirs. I've been experimenting with removing some of them and it didn't seem to break the system. Now I'm wondering what would be the "minimal" set of files (preferably binary-only) that would be needed in order to run a local couchdb server. Are there already any efforts to create such a deployment-friendly installer? Or could anyone give some (even very general) hints on how to create it? How much disk space would be minimally required for such an installation? Needless to say, I'm not at all familiar with either the couchdb internals or the Erlang system :). But perhaps I could figure it out if I got some direction (or I could stop trying if someone told me that this would be impossible or didn't make sense at all ...) Thanks anyway!

    Read the article

  • Getting value from pointer

    - by Eric
    Hi, I'm having a problem getting the value from a pointer. I have the following code in C++: void* Nodo::readArray(VarHash& var, string varName, int posicion, float& d) { //some code before... void* res; float num = bit.getFloatFromArray(arregloTemp); //THIS FUNCTION RETURNS A FLOAT AND IT'S OK cout << "NUMBER " << num << endl; d = num; res = &num; return res; } int main() { float d = 0.0; void* res = n.readArray(v, "c", 0, d); //THE VALUES OF THE ARRAY ARE: {65.5, 66.5}; float* car3 = (float*)res; cout << "RESULT_READARRAY " << *car3 << endl; cout << "FLOAT REFERENCE: " << d << endl; } The result of running this code is the following: NUMBER 65.5 RESULT_READARRAY -1.2001 //INCORRECT, IT SHOULD MATCH NUMBER FLOAT REFERENCE: 65.5 //CORRECT NUMBER 66.5 RESULT_READARRAY -1.2001 //INCORRECT, IT SHOULD MATCH NUMBER FLOAT REFERENCE: 66.5 //CORRECT For some reason, the value I get from the pointer returned by readArray is incorrect. I'm passing a float variable (d) by reference into the same function just to verify that the value is OK, and as you can see, THE FLOAT REFERENCE matches the NUMBER. If I declare the variable num (in readArray) as a static float, the first RESULT_READARRAY will be 65.5, which is correct; however, the next value will be the same instead of 66.5. Let me show you the result of running the code using a static float variable: NUMBER 65.5 RESULT_READARRAY 65.5 //PERFECT FLOAT REFERENCE: 65.5 //PERFECT NUMBER 65.5 //THIS IS INCORRECT, IT SHOULD BE 66.5 RESULT_READARRAY 65.5 FLOAT REFERENCE: 65.5 Do you know how I can get the correct value returned by the function readArray()?
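
    The symptom matches returning the address of a local variable: num lives on readArray's stack and is gone by the time the caller dereferences the pointer (the static version "works" because there is exactly one shared slot, hence the repeated 65.5). A minimal sketch of one fix, allocating the result on the heap so it outlives the call:

        #include <iostream>

        // Returning &num from readArray leaves the pointer dangling once the function
        // returns. Sketch of a fix: allocate the result on the heap; the caller deletes it.
        void *readFloat(float value)
        {
            float *res = new float(value);   // lives until the caller deletes it
            return res;
        }

        int main()
        {
            void  *res  = readFloat(65.5f);
            float *car3 = static_cast<float *>(res);
            std::cout << "RESULT " << *car3 << std::endl;   // prints 65.5
            delete car3;
            return 0;
        }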

    Read the article

  • Windows Server TCP Client application in c# Stops working after a while

    - by user1692494
    I am developing an application in C# on .NET Framework 2.0. It is basically a service application with a TCP client class. Class for the TCP client part: public TelnetConnection(string Hostname, int Port) { host = Hostname; prt = Port; try { tcpSocket = new TcpClient(Hostname, Port); } catch (Exception e) { //Console.WriteLine(e.Message); } } The application connects to the server and receives updates and other information. It works for about 20 minutes, then stops receiving information from the server. On the server side the client is still connected, and on the client side it is still connected too, but it stops receiving information from the server. I've been searching Stack Overflow and Google but no luck. try { if (!tcpSocket.Connected) return null; StringBuilder sb = new StringBuilder(); do { Parse(sb); System.Threading.Thread.Sleep(TimeOutMs); } while (tcpSocket.Available > 0); return sb.ToString(); } The application works perfectly when it runs as a console application, but when running as a service it just stops.

    Read the article

  • Linking Mapkit with Core Data, Search and user location. Converting annotations from a database in a tableview with search to display in a mapview?

    - by Jon
    Xcode is quite new to me so explanations are appreciated. I am looking to build an application that displays annotations in a map view (zoomed in on the current user location). I want the annotations to come from some sort of database rather than manually inputting all of them (which is what I'm currently doing). What would my application type be? Tab based? Window based? I want a tab for a table view with a list of my annotations, and a map view tab that will show my database of annotations but with the map zoomed in on the current location. In a perfect world, it would be great if the user could add favourites from these annotations and keep them in a favourites table view tab. I'm desperate to work this out and create a fully functional app for a final uni project. I have a working application already, but it's nothing like what I am trying to achieve; any help would be much appreciated!!!! Jon (I've looked through countless tutorials and as of yet found nothing I can understand that would let me achieve a project like this. Some would call me too ambitious; I just want to make a decent app.)

    Read the article

  • What to do when you are completely burnt out from working on a project?

    - by dfafa
    So I started this SaaS project around July 2009, expecting to finish it in 2 months. I ended up working on it for about 4 months straight, spending about 6~12 hours nearly every day. Then one day I just couldn't bear to look at the code. It seems like my efforts are being sucked in by some black hole. I would need to put in a lot of work to make incremental changes. I felt burnt out in December... and now it's May. I am working on the project for maybe about 10 hours every 2 weeks... I am not getting much done. It seems like it will never be perfect. The more I code, the more problems and bugs there are to fix; it's absolutely sickening. So what should I do now? I have invested all of my available time and money, I've shut off all social connections, and I've basically been spending most of my time in my room working on my project alone. I feel consumed by this project I created, ironically, to make my life easier.

    Read the article

  • How can I speed up the "finally get it" process?

    - by Earlz
    Hello, I am a hobby programmer and began when I was about 13. I'm currently going to college (freshman) for my computer science degree (which means I'm still in the stuff I already know, such as for loops). I've been programming professionally for a start-up for about 9 months or so now. I have a serious problem though. I think that almost all of the code I write is perfect. Now, I remember reading an article somewhere saying there are like 3 stages of learning programming: You don't know anything and you know you don't know anything. You don't know anything but you think you do. You finally get and accept that you don't know anything. (If someone finds that article, tell me and I'll add a link.) So right now, I'm at stage 2. How can I get to stage 3 quicker? The more of other people's code I read, the more I think "this is complete rubbish, I would've done it like..." and I really dislike that I think that way (and this fairly recently began happening, like over the past year).

    Read the article

  • Ruby on Rails bizarre behavior with ActiveRecord error handling

    - by randombits
    Can anyone explain why this happens? mybox:$ ruby script/console Loading development environment (Rails 2.3.5) >> foo = Foo.new => #<Foo id: nil, customer_id: nil, created_at: nil, updated_at: nil> >> bar = Bar.new => #<Bar id: nil, bundle_id: nil, alias: nil, real: nil, active: true, list_type: 0, body_record_active: false, created_at: nil, updated_at: nil> >> bar.save => false >> bar.errors.each_full { |msg| puts msg } Real can't be blank Real You must supply a valid email => ["Real can't be blank", "Real You must supply a valid email"] So far that is perfect, that is what i want the error message to read. Now for more: >> foo.bars << bar => [#<Bar id: nil, bundle_id: nil, alias: nil, real: nil, active: true, list_type: 0, body_record_active: false, created_at: nil, updated_at: nil>] >> foo.save => false >> foo.errors.to_xml => "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<errors>\n <error>Bars is invalid</error>\n</errors>\n" That is what I can't figure out. Why am I getting Bars is invalid versus the error messages displayed above, ["Real can't be blank", "Real you must supply a valid email"] etc. My controller simply has a respond_to method with the following in it: format.xml { render :xml => @foo.errors, :status => :unprocessable_entity } How do I have this output the real error messages so the user has some insight into what they did wrong?
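
    In Rails 2.3, errors on an invalid association are summarized on the parent as "Bars is invalid"; a sketch of one workaround (using the same 2.3-era error API as above) that copies the children's full messages up so the rendered XML shows the real reasons:

        # Sketch only: model names follow the console session above.
        class Foo < ActiveRecord::Base
          has_many :bars

          validate :collect_bar_errors

          private

          # Pull each invalid bar's messages onto the parent's error list.
          def collect_bar_errors
            bars.each do |bar|
              next if bar.valid?
              bar.errors.each_full { |msg| errors.add_to_base(msg) }
            end
          end
        end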

    Read the article

  • LINQ to SQL XML mapping to a generic type

    - by Manuel Navarro
    I'm trying to use an external XML file to map the output from a stored procedure into an instance of a class. The problem is that my class is of a generic type: public class MyValue<T> { public T Value { get; set; } } Searching through a lot of blogs and articles I've managed to get this: <?xml version="1.0" encoding="utf-8" ?> <Database Name="" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007"> <Table Name="MyValue" Member="MyNamespace.MyValue`1" > <Type Name="MyNamespace.MyValue`1"> <Column Name="Category" Member="Value" DbType="VarChar(100)" /> </Type> </Table> <Function Method="GetResourceCategories" Name="myprefix_GetResourceCategories" > <ElementType Name="MyNamespace.MyValue`1"/> </Function> </Database> The MyNamespace.MyValue`1 trick works fine, and the class is recognized. I expect four rows from the stored procedure, and I'm getting four MyValue<string> instances, but the big problem is that the property Value for all four instances is null. The property is not getting mapped and I don't really get why. Maybe worth noting that the property Value is generic, and that when the mapping is done using attributes it works perfectly. Anyone have a clue? BTW the method GetResourceCategories: public ISingleResult<MyValue<string>> GetResourceCategories() { IExecuteResult result = this.ExecuteMethodCall( this, (MethodInfo)MethodInfo.GetCurrentMethod()); return (ISingleResult<MyValue<string>>)result.ReturnValue; }

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    Need help with logging all activities on a site as well as database changes. Requirements: * should be in the database * should be easily searchable by initiator (user name / session id), event (activity type) and event parameters. I can think of a database design, but either it involves a lot of tables (one per event) so I can log each parameter of an event in a separate field, OR it involves one table with generic fields (7 int/numeric and 7 text types) and logs everything in one table with an event type field determining which parameter got written where (and hoping that I don't need more than 7 fields of a certain type, or 8 or 9 or whatever number I choose)... Example entries (the usual things): [username] login failed @datetime [username] login successful @datetime [username] changed password @datetime, estimated security of password [low/ok/high/perfect] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] changed profile name from [old name] to [new name] @datetime [username] verified name with [credit card type] credit card @datetime database table [table name] purged of old entries @datetime via automated process etc... So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned about database design, and as you can see the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do LOVE the one-event-per-table solution more than the generic one. Any thoughts? Edit: also, is there maybe an authoritative list of such (likely) events somewhere? Thanks. Stack Overflow says: the question you're asking appears subjective and is likely to be closed. My answer: it probably is subjective, but it is directly related to an issue I have with designing a database / writing my code, so I'd welcome any help. Also, I tried narrowing the ideas down to 2, so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.
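
    A sketch of a middle ground between the two designs discussed above -- one narrow event table plus a key/value parameter table, so new event types need no schema change and every parameter stays searchable (dialect-neutral SQL; adjust the types to your database):

        CREATE TABLE event (
            event_id     BIGINT       NOT NULL PRIMARY KEY,
            event_type   VARCHAR(50)  NOT NULL,      -- 'login_failed', 'password_changed', ...
            username     VARCHAR(100),
            session_id   VARCHAR(100),
            occurred_at  TIMESTAMP    NOT NULL
        );

        CREATE TABLE event_param (
            event_id     BIGINT       NOT NULL REFERENCES event (event_id),
            param_name   VARCHAR(50)  NOT NULL,      -- 'search_string', 'result_id', ...
            param_value  VARCHAR(255) NOT NULL,
            PRIMARY KEY (event_id, param_name)
        );

        CREATE INDEX ix_event_type_time ON event (event_type, occurred_at);
        CREATE INDEX ix_event_user      ON event (username, occurred_at);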

    Read the article

  • Graphical glitches when adding cells and scrolling with UITableView

    - by Daniel I-S
    I am using a UITableView to display the results of a series of calculations. When the user hits 'calculate', I wish to add the latest result to the screen. This is done by adding a new cell to a 'results' section. The UITableViewCell object is added to an array, and then I use the following code to add this new row to what is displayed on the screen: [thisView beginUpdates]; [thisView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation: UITableViewRowAnimationFade]; [thisView endUpdates]; This results in the new cell being displayed. However, I then want to immediately scroll the screen down so that the new cell is the lowermost cell on-screen. I use the following code: [thisView scrollToRowAtIndexPath:newIndexPath atScrollPosition:UITableViewScrollPositionBottom animated:YES]; This almost works great. However, the first time a cell is added and scrolled to, it appears onscreen only briefly before vanishing. The view scrolls down to the correct place, but the cell is not there. Scrolling the view by hand until this invisible new cell's position is offscreen, then back again, causes the cell to appear - after which it behaves normally. This only happens the first time a cell is added; subsequent cells don't have this problem. It also happens regardless of the combination of scrollToRowAtIndexPath and insertRowsAtIndexPath animation settings. There is also a problem where, if new cells are added repeatedly and quickly, the new cells stop 'connecting up'. The lowermost cell in a group is supposed to have rounded corners, and when a new cell is added these turn into square corners so that there is a clean join with the next cell in the group. In this case, however, a cell often does not lose its rounded edges despite not being the last cell anymore. This also gets corrected once the affected area moves offscreen and back. This method of adding and scrolling would be perfect for my application if it weren't for these weird glitches. Any ideas as to what I may be doing wrong?

    Read the article

  • How to upload files and store them in a server local path when MS SQL Server allows remote connections

    - by user193655
    I am developing a Win32 Windows application with Delphi and MS SQL Server. It works fine on a LAN, but I am trying to add support for SQL Server remote connections (= working with a DB that can be accessed with an external IP, as described in this article: http://support.microsoft.com/default.aspx?scid=kb;EN-US;914277). Basically I have a table in the DB where I keep the DocumentID, the document description and the document path (like \FILESERVER\MyApplicationDocuments\45.zip). Of course \FILESERVER is a local (LAN) path for the server but not for the client (as I am now trying to add support for remote connections). So I need a way to access \FILESERVER even though, of course, I cannot see it over the LAN. I found the following T-SQL code snippet that is perfect for the "download trick": SELECT BulkColumn as MyFile FROM OPENROWSET(BULK '\FILESERVER\MyApplicationDocuments\45.zip' , SINGLE_BLOB) AS X With the code above I can download a file to the client. But how do I upload one? I need an "upload trick" to be able to insert new files, but also to delete or replace existing files. Can anyone suggest something? If a trick is not available, could you suggest an alternative? Like an extended stored procedure, or calling some .NET assembly from the server.
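
    One commonly suggested alternative, sketched below with hypothetical table and procedure names: keep the document bytes in the row itself (varbinary(max), SQL Server 2005+) and insert them through an ordinary parameter, so the remote client never needs the UNC path. On the Delphi side the blob parameter can then be filled from disk (e.g. via TParameter.LoadFromFile on an ADO parameter).

        CREATE TABLE dbo.Document (
            DocumentID  INT IDENTITY(1,1) PRIMARY KEY,
            Description NVARCHAR(255)  NOT NULL,
            Content     VARBINARY(MAX) NOT NULL
        );
        GO

        CREATE PROCEDURE dbo.Document_Insert
            @Description NVARCHAR(255),
            @Content     VARBINARY(MAX)
        AS
            INSERT INTO dbo.Document (Description, Content)
            VALUES (@Description, @Content);
        GO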

    Read the article

  • How can I obtain the local TCP port and IP Address of my client program?

    - by Dr Dork
    Hello! I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server side code and client side code setup to communicate. Currently, my client code successfully connects to the server code and the server code sends it a test message, then both quit out. Perfect! That's exactly what I wanted to accomplish. Now I'm playing around with the functions used to obtain info about the two environments (server and client). I'd like to obtain the local IP address and dynamically assigned TCP port of the client. The function I've found to do this is getsockname()... //setup the socket if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("client: socket"); continue; } //Retrieve the locally-bound name of the specified socket and store it in the sockaddr structure sa_len = sizeof(sa); getsock_check = getsockname(sockfd,(struct sockaddr *)&sa,(socklen_t *)&sa_len) ; if (getsock_check== -1) { perror("getsockname"); exit(1); } printf("Local IP address is: %s\n", inet_ntoa(sa.sin_addr)); printf("Local port is: %d\n", (int) ntohs(sa.sin_port)); but the output is always zero... Local IP address is: 0.0.0.0 Local port is: 0 does anyone see anything I might be or am definitely doing wrong? Thanks so much in advance for all your help!
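
    A likely cause, judging from the snippet: getsockname() is called right after socket(), before the socket has been bound or connected, so the kernel has not yet assigned a local address and port. A minimal sketch that queries the local name only after connect() succeeds (the peer address here is just an example):

        #include <stdio.h>
        #include <string.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            struct sockaddr_in server;
            memset(&server, 0, sizeof(server));
            server.sin_family = AF_INET;
            server.sin_port   = htons(80);                      /* example peer */
            inet_pton(AF_INET, "93.184.216.34", &server.sin_addr);

            if (connect(fd, (struct sockaddr *)&server, sizeof(server)) == 0) {
                struct sockaddr_in local;
                socklen_t len = sizeof(local);
                getsockname(fd, (struct sockaddr *)&local, &len);   /* now populated */
                printf("Local IP address is: %s\n", inet_ntoa(local.sin_addr));
                printf("Local port is: %d\n", (int)ntohs(local.sin_port));
            }
            close(fd);
            return 0;
        }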

    Read the article

  • How do I know which include path will be used in PHP?

    - by Joe Majewski
    When I run phpinfo() and look under the Configuration category, under PHP Core, I see a directive titled include_path, with a local value and a master value. In this case, my local value is set to .: ./include: ../include: /usr/share/php: /usr/share/php/smarty: /usr/share/pear and my master value is set to .: /usr/share/php: /usr/share/pear: /usr/share/php/pear: /usr/share/php/smarty The reason I am trying to learn how this works is because there is a file in the system I am working on titled Smarty.class.php, which I'm sure sounds very familiar to anyone who uses the Smarty templating engine. One of the PHP files has the following includes: require_once("Smarty.class.php"); require_once("user_info_class.inc"); The file user_info_class.inc is in the same directory as the file making the include, which makes perfect sense to me, and is the way that I've always referenced files. I decided that I wanted to open up the Smarty.class.php file and had assumed it would be in the same directory, but it was not. After doing a bit of digging, I discovered those php.ini variables, and was finally able to locate the file in the directory /usr/share/php/smarty/. So it would seem that when making an include, PHP follows some sort of order between the local and master values for the include_path. Assuming that my deductions have been correct thus far, can someone explain the order in which PHP searches for the files to be included?
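
    For reference: the local value is the one actually in effect for the running script (it starts from the master value in php.ini and reflects any per-directory or runtime override), and PHP tries its entries left to right before falling back to the calling script's own directory. A quick sketch for inspecting and adjusting it at runtime:

        <?php
        // Show the include_path actually in effect for this request (the "local" value):
        echo get_include_path() . PHP_EOL;

        // Prepend the Smarty directory for this script only, keeping the existing entries:
        set_include_path('/usr/share/php/smarty' . PATH_SEPARATOR . get_include_path());

        require_once 'Smarty.class.php';   // now resolved via the prepended entry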

    Read the article

  • Glob() filesearch, question

    - by Peter
    Hi, a little question. I have this code, which works perfectly for files, but if I try searching on a directory name, the result is blank. How can I fix that? <?php function listdirs($dir,$search) { static $alldirs = array(); $dirs = glob($dir."*"); foreach ($dirs as $d){ if(is_file($d)){ $filename = pathinfo($d); if(eregi($search,$filename['filename'])){ print "<a href=http://someurl.com/" . $d .">". $d . "</a><br/>"; } }else{ listdirs($d."/",$search); } } } $path = "somedir/"; $search= "test"; listdirs($path,$search); ?> somedir/test/ result: blank (I want: /somedir/test/) somedir/test/test.txt result: OK I want to search the directory names as well; how can I do that?
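
    A sketch of the missing branch, keeping the same recursive structure: test the directory's own name against $search before recursing (stripos is used here in place of the deprecated eregi):

        <?php
        function listdirs($dir, $search)
        {
            foreach (glob($dir . "*") as $d) {
                if (is_file($d)) {
                    $name = pathinfo($d, PATHINFO_FILENAME);
                    if (stripos($name, $search) !== false) {
                        echo '<a href="http://someurl.com/' . $d . '">' . $d . "</a><br/>\n";
                    }
                } else {
                    // Match the directory's own name, then keep descending into it.
                    if (stripos(basename($d), $search) !== false) {
                        echo '<a href="http://someurl.com/' . $d . '/">' . $d . "/</a><br/>\n";
                    }
                    listdirs($d . "/", $search);
                }
            }
        }

        listdirs("somedir/", "test");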

    Read the article

  • Hibernate 3.5.0 causes extreme performance problems

    - by user303396
    I've recently updated from hibernate 3.3.1.GA to hibernate 3.5.0 and I'm having a lot of performance issues. As a test, I added around 8000 entities to my DB (which in turn cause other entities to be saved). These entities are saved in batches of 20 so that the transactions aren't too large, for performance reasons. When using hibernate 3.3.1.GA all 8000 entities get saved in about 3 minutes. When using hibernate 3.5.0 it starts out slower than with hibernate 3.3.1, and it gets slower and slower. At around 4,000 entities, it sometimes takes 5 minutes just to save a batch of 20. If I then go to a mysql console and manually type in an insert statement from the mysql general query log, half of them run perfectly in 0.00 seconds, and half of them take a long time (maybe 40 seconds) or time out with "ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction" from MySQL. Has something changed in hibernate's transaction management in version 3.5.0 that I should be aware of? The ONLY thing I changed to experience these unusable performance issues was replacing the following hibernate 3.3.1.GA jar files: com.springsource.org.hibernate-3.3.1.GA.jar, com.springsource.org.hibernate.annotations-3.4.0.GA.jar, com.springsource.org.hibernate.annotations.common-3.3.0.ga.jar, com.springsource.javassist-3.3.0.ga.jar with the new hibernate 3.5.0 release hibernate3.jar and javassist-3.9.0.GA.jar. Thanks.

    Read the article

  • Eclipse PDT "tips" ?

    - by Pascal MARTIN
    Hi! (Yes, this is quite an open, general and subjective question -- that's by design, because I want the tips you think are great!) I'm using Eclipse PDT 2.1 to work in PHP, for both small and big projects -- I've been doing so for quite some time now, actually (since before 1.0 stable, if I remember correctly)... I was wondering if any of you know "tips" for being more efficient. Let me explain in more detail: I know about things like plugins such as Aptana (a better editor for JS/CSS), Subversive (for SVN access), RSE, Filesync, integrating Xdebug's debugger, ... What I mean by "tips" is more the little things you discovered one day and have used all the time since -- things that allow you to be more efficient in your PHP projects. Some examples of "tips" that come to my mind, and that I already know and use: ctrl+space to open the list of suggestions for function / variable names; ctrl+shift+R (navigate > open resource) to open a popup which shows only files whose names contain what you type, i.e. quick opening of files -- this one might be the perfect example: I know it is not often known by coworkers, and they find it as useful as I do, so I guess there might be lots of other things like it that I don't know myself ^^; ctrl+M to switch to full-screen view for the editor (instead of double-clicking on the tabs bar); shift+F2 while on a function name, to open its page in the PHP manual in a browser. Note for Mac users: use Command instead of Control. I guess you get the point; but I'm really open to any suggestion (be it Eclipse-related in general, or more PHP/PDT-specific) that can help me be more efficient :-) Anyway, thanks in advance for your help!

    Read the article

  • Database schemas WAY out of sync - need to get up to date without losing data

    - by Zind
    The problem: we have one application that has a portion which is used by a very small subset of the total users, and that part of the application is running off of a separate database as well. In a perfect world, the schemas of the two databases would be synced up, but such is not the case. Some migrations have been run on the smaller database, most haven't; and furthermore, there is nothing such as revision number to be able to easily identify which have and which haven't. We would like to solve this quandary for future projects. During a discussion we've come up with the following possible plan of action, and I am wondering if anyone knows of any project which has already solved this problem: What we would like to do is create an empty database from the schema of the large fully-migrated database, and then move all of the data from the smaller non-migrated database into that empty one. If it makes things easier, it can probably be assumed for the sake of this problem specifically that no migrations have ever removed anything, only added. Else, if there are other known solutions, I'd like to hear them as well.

    Read the article
