Search Results

Search found 16639 results on 666 pages for 'task engine'.

Page 581 of 666

  • Existing Calendar Software for Web Service?

    - by OverClocked
    I'd like to offer the users of my web platform calendars, including group calendar and public calendar features. I know of the Google Calendar API, of course, but that would require all my users to have Google accounts (although I guess the question is who doesn't these days). I'm wondering if there are any other alternatives? I could write my own, but this must have been done thousands of times already! I'm not so interested in the front end; we'd probably re-skin the UI anyway. What I want is a solution with good backend support for storing events and recurring events and for maintaining multiple calendars. Ideally, the engine would also support things like sharing events between different calendars, syncing with other calendars, and so on. The sad thing is that Google Calendar does seem to cover all of this, but asking every user to have a Google account may not be realistic. Especially since we'd also want to create group calendars, and I'm not sure whose Google account those would live under, or whether that would even be covered by the terms of use. Your thoughts? Thanks.

    Read the article

  • MVC : Checkboxes generated using JavaScript not appearing in FormCollection on postback

    - by Andy Evans
    I took over a project (written by one contractor, modified by another, and now broken) built with MVC/C#, in which a view containing a table (see below) is dynamically populated via JSON/JavaScript; the first column of that table is a checkbox.

    View (Spark view engine):

        <table id='component_list' name='component_list' cellpadding='0' border='0' cellspacing='0'>
          <thead>
            <tr>
              <th>&nbsp;</th>
              <th>Component</th>
              <th>Component Type</th>
              <th>Evenflo Part #</th>
              <th>Supplier Part #</th>
              <th>Supplier</th>
              <th>Requirement</th>
              <th>Location</th>
              <th>Region</th>
            </tr>
          </thead>
          <tbody>
          </tbody>
        </table>

    When the page is rendered, I look at the page source and do not see the table data (I wouldn't expect to, since it is added client-side). However, when the form is posted back, the FormCollection the controller receives is empty. Supposedly this worked before the last contractor got his hands on it, but that's another post altogether. My goal right now is getting the checkboxes into the FormCollection. Any suggestions would be greatly appreciated. Thanks,
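    A likely direction, offered as a hedged guess rather than a confirmed diagnosis: the browser only submits inputs that carry a name attribute and sit inside the <form> element, so checkboxes appended via the DOM without names never reach the FormCollection. A minimal sketch of row-building code (the JSON shape and the field name selectedComponents are assumptions, not taken from the project):

        // Sketch: append one row per JSON item, giving each checkbox a name
        // so ASP.NET MVC sees it in the posted FormCollection.
        function addComponentRows(components) {
            var tbody = document.getElementById('component_list')
                                .getElementsByTagName('tbody')[0];
            for (var i = 0; i < components.length; i++) {
                var row = tbody.insertRow(-1);
                var box = document.createElement('input');
                box.type = 'checkbox';
                // Without a name attribute the value is never submitted.
                box.name = 'selectedComponents';   // assumed field name
                box.value = components[i].id;      // assumed JSON property
                row.insertCell(0).appendChild(box);
                row.insertCell(1).innerHTML = components[i].name;  // assumed property
                // ...remaining columns omitted...
            }
        }

    It is also worth confirming that the table itself is rendered inside the <form> tag; a table sitting outside it posts nothing, names or not.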

    Read the article

  • How to control the system volume using javascript

    - by Geetha
    Hi, I am using the Windows Media Player control to play audio and video, and I have created my own buttons to increase and decrease the player's volume. That part works fine. Problem: even after the volume reaches 0% it is still audible, and I would like the system volume to increase along with the player volume. Is that possible, and how do I achieve this task?

    Control:

        <object id="mediaPlayer" classid="clsid:22D6F312-B0F6-11D0-94AB-0080C74C7E95"
                codebase="http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701"
                height="1" standby="Loading Microsoft Windows Media Player components..."
                type="application/x-oleobject" width="1">
          <param name="fileName" value="" />
          <param name="animationatStart" value="true" />
          <param name="transparentatStart" value="true" />
          <param name="autoStart" value="true" />
          <param name="showControls" value="true" />
          <param name="volume" value="70" />
        </object>

    Code:

        function decAudio() {
            if (document.mediaPlayer.Volume >= -1000) {
                var newVolume = document.mediaPlayer.Volume - 100;
                if (newVolume >= -1000) {
                    document.mediaPlayer.Volume = document.mediaPlayer.Volume - 100;
                } else {
                    document.mediaPlayer.Volume = -1000;
                }
            }
        }
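    Two hedged notes for anyone landing here. First, script running in a web page cannot change the Windows system volume; no JavaScript works around that. Second, if the old WMP 6.4 documentation is remembered correctly, the control's Volume property is measured in hundredths of a decibel, from 0 (full volume) down to -10000 (effectively silent), so clamping at -1000 stops at only -10 dB, which would explain why the player is still clearly audible at the supposed minimum. A sketch of the decrement with the lower floor (the step size is an arbitrary choice):

        // Sketch: step down 1000 (10 dB) per click and floor at -10000,
        // which is silence for this control, rather than at -1000.
        function decAudio() {
            var newVolume = document.mediaPlayer.Volume - 1000;
            document.mediaPlayer.Volume = Math.max(newVolume, -10000);
        }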

    Read the article

  • Is it possible to unit test methods that rely on NHibernate Detached Criteria?

    - by Aim Kai
    I have tried to use Moq to unit test a method on a repository that uses the DetachedCriteria class, but I have come up against a problem: I cannot mock the criteria object that is built internally. Is there any way to mock detached criteria?

    Test method:

        [Test]
        [Category("UnitTest")]
        public void FindByNameSuccessTest()
        {
            // Mock NHibernate here
            var sessionMock = new Mock<ISession>();
            var sessionManager = new Mock<ISessionManager>();
            var queryMock = new Mock<IQuery>();
            var criteria = new Mock<ICriteria>();
            var sessionIMock = new Mock<NHibernate.Engine.ISessionImplementor>();
            var expectedRestriction = new Restriction { Id = 1, Name = "Test" };

            // Set up expected returns
            sessionManager.Setup(m => m.OpenSession()).Returns(sessionMock.Object);
            sessionMock.Setup(x => x.GetSessionImplementation()).Returns(sessionIMock.Object);
            queryMock.Setup(x => x.UniqueResult<SopRestriction>()).Returns(expectedRestriction);
            criteria.Setup(x => x.UniqueResult()).Returns(expectedRestriction);

            // Build repository
            var rep = new TestRepository(sessionManager.Object);

            // Call repository here to get the result
            var returnR = rep.FindByName("Test");
            Assert.That(returnR.Id == expectedRestriction.Id);
        }

    Repository class:

        public class TestRepository
        {
            protected readonly ISessionManager SessionManager;

            public virtual ISession Session
            {
                get { return SessionManager.OpenSession(); }
            }

            public TestRepository(ISessionManager sessionManager)
            {
            }

            public SopRestriction FindByName(string name)
            {
                var criteria = DetachedCriteria.For<Restriction>().Add<Restriction>(x => x.Name == name);
                return criteria.GetExecutableCriteria(Session).UniqueResult<SopRestriction>();
            }
        }

    Note that I am using NHibernate.LambdaExtensions and Castle.Facilities.NHibernateIntegration here as well. Any help would be gratefully appreciated.
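    One way out, sketched under the assumption that the repository can be changed: DetachedCriteria is a concrete NHibernate class with no interface, so rather than mocking it, hide its execution behind a seam you own and stub that seam in the test. The interface below is hypothetical, not part of NHibernate:

        // Hypothetical seam: the repository hands the criteria to this
        // executor instead of running it itself, so a Moq stub can return
        // a canned result without any session plumbing.
        public interface ICriteriaExecutor
        {
            T UniqueResult<T>(DetachedCriteria criteria);
        }

        public class TestRepository
        {
            private readonly ICriteriaExecutor _executor;

            public TestRepository(ICriteriaExecutor executor)
            {
                _executor = executor;
            }

            public SopRestriction FindByName(string name)
            {
                var criteria = DetachedCriteria.For<SopRestriction>()
                                               .Add(Restrictions.Eq("Name", name));
                return _executor.UniqueResult<SopRestriction>(criteria);
            }
        }

    The alternative that avoids mocking altogether is to treat repository methods as integration tests against an in-memory database (SQLite is the usual choice with NHibernate), since a fully stubbed criteria test mostly verifies the mock setup rather than the query.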

    Read the article

  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException or output that takes 1-2 minutes to generate. Let's keep it simple and suppose we have two tables in the DB: Worker and WorkLog, with about 1,000 rows in the first and 10,000,000 rows in the second. The latter table has several fields, including workerId and hoursWorked. We need to: count the total hours worked by each user, and list the work periods for each user. The most straightforward approach (IMO) for each task in plain SQL is:

        1) select Worker.name, sum(hoursWorked)
           from Worker, WorkLog
           where Worker.id = WorkLog.workerId
           group by Worker.name;
           -- the results of this query should be transformed to Multimap<Worker, Long>

        2) select Worker.name, WorkLog.start, WorkLog.hoursWorked
           from Worker, WorkLog
           where Worker.id = WorkLog.workerId;
           -- the results of this query should be transformed to Multimap<Worker, Period>
           -- if this were JDBC, it would be vital to call
           -- resultSet.setFetchSize(someSmallNumber), ~100

    So, I have two questions: how do I implement each of these approaches with JPA (or at least with Hibernate), and how would you handle this problem (with JPA or Hibernate, of course)?
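    A hedged sketch of the usual Hibernate answer, assuming mapped entities Worker and WorkLog with a workLogs collection on Worker: push the aggregation into the database with a projection query, and stream the detail rows with ScrollableResults instead of materializing ten million entities:

        // (1) aggregate in the DB: one row per worker comes back, not 10M rows
        List<Object[]> totals = session.createQuery(
                "select w.name, sum(l.hoursWorked) " +
                "from Worker w join w.workLogs l group by w.name")
            .list();

        // (2) stream the detail rows with a small JDBC fetch size instead of
        // loading everything into one List
        ScrollableResults rows = session.createQuery(
                "select w.name, l.start, l.hoursWorked " +
                "from Worker w join w.workLogs l")
            .setFetchSize(100)
            .scroll(ScrollMode.FORWARD_ONLY);
        while (rows.next()) {
            Object[] row = rows.get();
            // ...accumulate (row[0], row[1], row[2]) into the Multimap...
        }
        rows.close();

    If you scroll entities rather than scalar projections, call session.clear() every few hundred rows so the first-level cache stays flat. Plain JPA 1.0 has no scrolling equivalent; the portable fallback is paging with setFirstResult/setMaxResults, slower but memory-bounded.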

    Read the article

  • Serialize Forms into memory

    - by serhio
    Background: in a desktop MDI application we have a lot of forms. Task: save the contents of a closed form's controls (textbox texts, checkbox checks, etc.). Limitations: should a saved form belong to a (DB/Windows) user, or to a (DB/Windows) group of users? Both variants may be needed. Question: (a) what is the best way to do this? (b) If I don't want to use files, how do I serialize the form into a MemoryStream and then restore it when a previously opened and serialized form is opened again? Starting point: I implemented a form that implements ISerializable; it is deserialized on opening and serialized on closing:

        Public Sub GetObjectData(ByVal info As System.Runtime.Serialization.SerializationInfo, _
                                 ByVal context As System.Runtime.Serialization.StreamingContext) _
                Implements System.Runtime.Serialization.ISerializable.GetObjectData
            info.AddValue("tbxExportFolder", tbxExportFolder.Text, GetType(String))
            info.AddValue("cbxCheckAliasUnicity", cbxCheckAliasUnicity.Checked, GetType(Boolean))
        End Sub

        Public Sub New(ByVal info As SerializationInfo, ByVal context As StreamingContext)
            Me.New()
            Me.tbxExportFolder.Text = info.GetString("tbxExportFolder")
            Me.cbxCheckAliasUnicity.Checked = info.GetBoolean("cbxCheckAliasUnicity")
        End Sub

        Private Sub SerializeMe()
            Dim binFormatter As New Formatters.Binary.BinaryFormatter
            Dim fileStream As New FileStream(SerializedFilename, FileMode.Create)
            Try
                binFormatter.Serialize(fileStream, Me)
            Catch
                Throw
            Finally
                fileStream.Close()
            End Try
        End Sub

        Protected Overrides Sub OnClosing(ByVal e As System.ComponentModel.CancelEventArgs)
            SerializeMe()
            MyBase.OnClosing(e)
        End Sub
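    For question (b), a minimal sketch, with the caveat that Form subclasses often drag non-serializable members into BinaryFormatter's object graph, so serializing only the control values (as the GetObjectData above already does) is the safer shape. Swap the FileStream for a MemoryStream and keep the bytes in a session-level dictionary (FormStates is an invented name):

        ' Hypothetical in-memory store: form name -> serialized state.
        Private Shared FormStates As New Dictionary(Of String, Byte())

        Private Sub SerializeMe()
            Dim binFormatter As New Formatters.Binary.BinaryFormatter
            Using memStream As New MemoryStream()
                binFormatter.Serialize(memStream, Me)
                FormStates(Me.Name) = memStream.ToArray()
            End Using
        End Sub

        Private Shared Function DeserializeForm(ByVal formName As String) As Object
            ' Returns Nothing if the form was never saved this session.
            If Not FormStates.ContainsKey(formName) Then Return Nothing
            Dim binFormatter As New Formatters.Binary.BinaryFormatter
            Using memStream As New MemoryStream(FormStates(formName))
                Return binFormatter.Deserialize(memStream)
            End Using
        End Function

    For the per-user versus per-group requirement, the same Byte() payload can be written to a DB table keyed by user or group instead of being held in the dictionary.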

    Read the article

  • Get Function Pointer to function in a shared library I didn't directly load

    - by bdk
    My Linux application (A) links against a third-party shared library (B) to which I don't have source code. That library makes use of another third-party shared library (C), also closed to me. I believe (B) uses dlopen to access (C) instead of linking directly: ldd on (B) does not show (C), and objdump -x on (B) shows references to dlopen/dlclose/dlsym. My requirement is that, in my code for (A), I need to get a function pointer to a function foo() located in (C). Normally I'd use dlsym for this, but dlsym needs the handle returned from dlopen, which I don't have, since (B) does not expose it. For the larger context: I need to modify the function in (C) so that every time it calls its helper function bar() (also located in (C)), it also calls a function with the same signature located in (A), with the same parameters. Basically I want to inject my code into the foo()-bar() code path of (C). I believe I've found a way to accomplish this using gdb, but I'm stuck on the step of getting the function pointer when porting my gdb command list to code. I'm also open to alternatives that accomplish the same task, rather than the exact problem as stated above. Edit: after writing this, I realized I can probably just dlopen the file again in my own code; if I'm reading the dlopen man page correctly, the symbols returned via dlsym on that handle should be the same as those from the original dlopen. However, I'm still interested in advice or assistance with the larger context, if there's a better way to go about this.
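    The edit is on the right track: dlopen reference-counts, so opening (C) again yields the same loaded copy. A sketch in C, where the library name and foo's signature are assumptions; glibc's RTLD_NOLOAD flag makes the intent explicit by failing if the library is not already mapped:

        #define _GNU_SOURCE   /* RTLD_NOLOAD is a GNU extension */
        #include <dlfcn.h>
        #include <stdio.h>

        typedef int (*foo_fn)(int);   /* assumed signature of foo() in (C) */

        foo_fn get_foo(void)
        {
            /* Only succeeds if (B) has already dlopen'ed the library. */
            void *handle = dlopen("libC.so", RTLD_LAZY | RTLD_NOLOAD);
            if (!handle) {
                fprintf(stderr, "libC.so is not loaded: %s\n", dlerror());
                return NULL;
            }
            return (foo_fn)dlsym(handle, "foo");
        }

    For the larger goal of intercepting calls to bar(), the usual tool is LD_PRELOAD interposition, with the caveat that it only catches calls that go through the dynamic symbol table; if (C) calls bar() through an internal, non-exported binding, preloading will not see those calls.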

    Read the article

  • How to keep your unit test Arrange step simple and still guarantee DDD invariants?

    - by ian31
    DDD recommends that domain objects should be in a valid state at any time. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts so that they are initialized in a valid state. However, this seems to complicate the task of creating simple, isolated unit tests a lot. Let's assume we have a BookRepository that contains Books. A Book has: an Author, a Category, and a list of Bookstores you can find the book in. These are required attributes: a book has to have an author, a category, and at least one bookstore you can buy the book from. There's likely to be a BookFactory, since it is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes. Now we want to unit test a method of the BookRepository that returns all the Books. To test that the method returns the books, we have to set up a test context (the Arrange step in AAA terms) where some Books are already in the Repository. If the only tool at our disposal for creating Book objects is the Factory, the unit test now also uses and depends on the Factory, and indirectly on Category, Author and Store, since we need those objects to build up a Book and then place it in the test context. Would you consider this a dependency in the same way that, in a Service unit test, we would depend on, say, a Repository that the Service calls? How would you solve the problem of having to re-create a whole cluster of objects in order to test a simple thing? How would you break that dependency and get rid of all these attributes we don't need in our test? By using mocks or stubs? If you mock up the things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?
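    One pattern that usually dissolves this tension is the Test Data Builder: a small test-only class that fills in valid defaults for every invariant, so each test names only the details it cares about while construction still goes through the real Factory. A sketch in C# (the domain type names come from the question; the builder itself is invented):

        // Test-only builder: valid defaults for every required attribute,
        // overridable per test, construction delegated to the real factory
        // so the invariants stay enforced.
        public class BookBuilder
        {
            private string _title = "Any Title";
            private Author _author = new Author("Any Author");
            private Category _category = new Category("Any Category");
            private List<Bookstore> _stores =
                new List<Bookstore> { new Bookstore("Any Store") };

            public BookBuilder WithTitle(string title) { _title = title; return this; }
            public BookBuilder By(Author author) { _author = author; return this; }

            public Book Build()
            {
                return BookFactory.Create(_title, _author, _category, _stores);
            }
        }

        // The Arrange step shrinks to:
        // repository.Add(new BookBuilder().WithTitle("DDD").Build());

    This is a different kind of dependency from a mocked collaborator: the builder produces real, valid domain objects (state), whereas mocks and stubs stand in for things the object under test talks to (interactions), and the usual advice is to reserve mocking for the latter.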

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: it has many Contributions and many Payments. In my result set, I need the following aggregate values: the number of unique contributors (DonorID on the Contribution table), the total contributed (SUM of Amount on the Contribution table), and the total paid (SUM of PaymentAmount on the Payment table). Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions in the GROUP BY clause. I also need the ability to sort and filter on these fields. So I've come up with two options.

    Using subqueries:

        SELECT
            Project.ID AS PROJECT_ID,
            (SELECT SUM(PaymentAmount) FROM Payment
             WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
            (SELECT COUNT(DISTINCT DonorID) FROM Contribution
             WHERE RecipientID = PROJECT_ID) AS ContributorCount,
            (SELECT SUM(Amount) FROM Contribution
             WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY (project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE
            total_donors = VALUES(total_donors),
            total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7-0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
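    A third shape worth benchmarking alongside those two, sketched with the table and column names from the question: aggregate each child table once in a derived table and join the results, which avoids both the per-row correlated subqueries of the first option and the temp-table bookkeeping of the second:

        SELECT p.ID AS PROJECT_ID,
               IFNULL(pay.TotalPaidBack, 0)    AS TotalPaidBack,
               IFNULL(con.ContributorCount, 0) AS ContributorCount,
               IFNULL(con.TotalReceived, 0)    AS TotalReceived
        FROM Project p
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS TotalPaidBack
                   FROM Payment
                   GROUP BY ProjectID) pay ON pay.ProjectID = p.ID
        LEFT JOIN (SELECT RecipientID,
                          COUNT(DISTINCT DonorID) AS ContributorCount,
                          SUM(Amount) AS TotalReceived
                   FROM Contribution
                   GROUP BY RecipientID) con ON con.RecipientID = p.ID;

    Each child table is scanned exactly once regardless of the number of projects, and indexes on Payment.ProjectID and Contribution.RecipientID keep the derived-table aggregation cheap as the data grows.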

    Read the article

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055), and a table that currently has 478 million rows of data. The primary key column is an INT with IDENTITY, and there is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":

        1. Drop the PK and clustered index.
        2. Drop the non-clustered index on the two columns with the unique constraint.
        3. Recreate the PK with a non-clustered index.
        4. Create a clustered index on the two columns with the unique constraint.

    I am not convinced that this is the right thing to do, and I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data; creating a clustered index on two columns after that would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table, and renaming the new one be a better approach? I am also not sure how the stored procs will react: will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I simply cannot understand what kind of "performance improvement" this change will provide; I think it will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj
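    A sketch of the copy-and-swap approach mentioned above, with invented object names; the point is that building the clustered index on the empty table first means rows land in clustered order once on insert, instead of the server re-sorting a 478M-row heap afterwards:

        -- Assumed names and columns; run during a maintenance window.
        CREATE TABLE dbo.BigTable_New (
            Id   INT IDENTITY NOT NULL,
            KeyA INT NOT NULL,
            KeyB INT NOT NULL
            -- ...remaining columns...
        );

        -- Clustered index first, matching the vendor's target layout.
        CREATE UNIQUE CLUSTERED INDEX IX_BigTable_New_KeyAB
            ON dbo.BigTable_New (KeyA, KeyB);
        ALTER TABLE dbo.BigTable_New
            ADD CONSTRAINT PK_BigTable_New PRIMARY KEY NONCLUSTERED (Id);

        SET IDENTITY_INSERT dbo.BigTable_New ON;
        INSERT INTO dbo.BigTable_New (Id, KeyA, KeyB /*, ... */)
        SELECT Id, KeyA, KeyB /*, ... */ FROM dbo.BigTable;
        SET IDENTITY_INSERT dbo.BigTable_New OFF;

        EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
        EXEC sp_rename 'dbo.BigTable_New', 'BigTable';

    On the plan-cache question: the schema change should invalidate cached plans that reference the table, so the procs recompile on their next execution; the bigger operational risk is the hours of logging and blocking while 478 million rows move, not stale plans.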

    Read the article

  • Mysql select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags for 3 different loan types. Everything went well during testing; query execution time was perfect. But today, when I transferred the changes to the production server, the site "collapsed" in about 2 minutes. The query that selects the loan types sometimes hangs for ~10 seconds, and it happens frequently, so the server can't keep up and everything is extremely slow. The table that stores the data has approximately 2 million records, and each select looks like this:

        SELECT * FROM products_loans
        WHERE KOD IN ("X17/Q30-10", "X17/12", "X17/5-24")
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;

    That is 3 loan types and a price that needs to be in the range between CENA_OD and CENA_DO, so 3 rows are returned. But since I need to display 10 products per page, I have to run this through a modified select using OR, since I didn't find any other solution. I have asked about it here before, but got no answer. As mentioned in the referenced post, this has to be done separately, since there is no column that could be used in a join (except, of course, price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE; KOD and CENA_OD/CENA_DO are indexed:

        CREATE TABLE `products_loans` (
          `KOEF_ID` bigint(20) NOT NULL,
          `KOD` varchar(30) NOT NULL,
          `AKONTACIA` int(11) NOT NULL,
          `POCET_SPLATOK` int(11) NOT NULL,
          `koeficient` decimal(10,2) NOT NULL default '0.00',
          `CENA_OD` decimal(10,2) default NULL,
          `CENA_DO` decimal(10,2) default NULL,
          `PREDAJNA_CENA` decimal(10,2) default NULL,
          `AKONTACIA_SUMA` decimal(10,2) default NULL,
          `TYP_VYHODY` varchar(4) default NULL,
          `stage` smallint(6) NOT NULL default '1',
          PRIMARY KEY (`KOEF_ID`),
          KEY `CENA_OD` (`CENA_OD`),
          KEY `CENA_DO` (`CENA_DO`),
          KEY `KOD` (`KOD`),
          KEY `stage` (`stage`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    Selecting all loan types and filtering them through PHP afterwards doesn't work well either, since each type has over 50k records and that select takes too much time as well... Any ideas for improving the speed are appreciated.
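    One hedged observation: with separate single-column indexes on KOD, CENA_OD and CENA_DO, MySQL generally picks just one index per table access, leaving the other two conditions to be checked row by row. A composite index shaped like the WHERE clause is the usual first fix for this query pattern:

        -- Sketch: equality/IN column first, then a range column. MySQL can
        -- then dive the index once per KOD value and scan only the matching
        -- CENA_OD range instead of every row for that code.
        ALTER TABLE products_loans
            ADD INDEX idx_kod_cena (KOD, CENA_OD, CENA_DO);

    Running EXPLAIN before and after shows whether the new index is picked up. The "constant BETWEEN col1 AND col2" shape is inherently awkward for B-trees, so some rows will still be post-filtered, but far fewer than with the single-column indexes.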

    Read the article

  • Adding Mysql Columns in Rails Rake

    - by Gigg
    I have a rake file that does a series of calculations on a database; basically it accumulates equipment usage regularly. At the end of the day it needs to add that day's total to a monthly total table and update that same table. I use the following simple concepts. Getting data from the database is pretty simple:

        @usages = Usage.find(:all)
        time = Time.new
        for usage in @usages
          sql = ActiveRecord::Base.connection

    To insert it into the database:

        sql.execute "INSERT (or UPDATE) into usages ..."  ## add values and options as per MySQL

    But how do I take a column from a database, add together all of its values that share a common value in another column (i.e. if column x == value y), and then insert the result into a column in another table, say dailyusages? I have tried these options:

        task (:monthly => :environment) do
          @dailyusages = Dailyusage.find(:all)
          for dailyusage in @dailyusages
            sql = ActiveRecord::Base.connection
            time = Time.new
            device = monthlyusages.device
            month = time.month
            if device == dailyusages.device ## && month == dailyusages.month
              total = (dailyusage.total.sum.to_i)
              @monthlyusages = Monthlyusage.find(:all)
              for monthlyusage in @monthlyusages
                sql = ActiveRecord::Base.connection
                old_total = monthlyusage.total.to_i
                new_total = (old_total + total)
                sql.execute "UPDATE monthlyusages ( year, month, total, device ) values('#{time.year}', '#{time.month}', '#{total}', '#{dailyusage.device}' )"
              end
            end
          end
        end

    I have obviously commented options out and tried all sorts of things. Any help would really save me a load of trouble. Thanks in advance. (BTW, I am new to Rails, so go easy on me.)
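    A hedged sketch of the idiomatic (Rails 2-era) way to do this rollup without hand-written SQL, assuming models named Dailyusage and Monthlyusage with device, month, year and total columns, as the code above suggests: let the database do the per-device summing, then upsert each monthly row through ActiveRecord:

        # Sum each device's daily totals in one query, then create or
        # update the matching monthlyusages row.
        task :monthly => :environment do
          time = Time.now
          totals = Dailyusage.sum(:total, :group => :device)
          totals.each do |device, total|
            monthly = Monthlyusage.find_or_initialize_by_device_and_month_and_year(
                        device, time.month, time.year)
            monthly.total = monthly.total.to_i + total.to_i
            monthly.save!
          end
        end

    Note also that the UPDATE statement in the question mixes INSERT syntax ("UPDATE table (cols) values(...)") into an UPDATE; with the approach above, ActiveRecord generates the correct INSERT or UPDATE for you.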

    Read the article

  • Is There a More Efficient Way to Write Nested While Loops?

    - by Ryan
    The code below shows nested while loops, but it's not very efficient. Suppose I wanted to extend the code to include 100 nested while loops; is there a better way to accomplish this task?

        <?php
        $count = 1;
        $num = 1;
        $options = 3;
        while ($num <= $options) {
            echo "(" . $num . ") ";
            $num1 = 1;
            $options1 = 3;
            while ($num1 <= $options1) {
                echo "*" . $num1 . "* ";
                $num2 = 1;
                $options2 = 3;
                while ($num2 <= $options2) {
                    echo "@" . $num2 . "@ ";
                    $num3 = 1;
                    $options3 = 3;
                    while ($num3 <= $options3) {
                        echo $num3 . " ";
                        $num3++;
                        $count++;
                    }
                    echo "<br />";
                    $num2++;
                }
                echo "<br />";
                $num1++;
            }
            echo "<br />";
            $num++;
        }
        echo $count;
        ?>
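    The standard answer is recursion: one function whose nesting depth is a parameter, so "100 nested loops" becomes a single call. A sketch (the per-level decoration markers of the original are simplified away; the iteration count is the same):

        <?php
        // Prints the nested counting pattern to any depth; nest(4, 3)
        // reproduces the four-level structure of the code above.
        function nest($depth, $options) {
            global $count;
            for ($num = 1; $num <= $options; $num++) {
                echo $num . " ";
                if ($depth > 1) {
                    nest($depth - 1, $options);
                } else {
                    $count++;
                }
            }
            echo "<br />";
        }

        $count = 1;
        nest(4, 3);
        echo $count;
        ?>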

    Read the article

  • Working with Decimal fields in SSIS

    - by CoffeeAddict
    I'm using SQL Server 2008 w/SP2. I've got an incoming decimal(9,2) field coming through my OLE DB transformation to my recordset destination transformation. It's as if it's being read as something other than a decimal; I don't know, I'm not an SSIS guru. The problem for me starts when I try to stuff the value of this decimal field into a variable. In a Foreach Loop, I have a variable to represent the field so I can work with it. The first problem, which is pretty well known, is that SSIS variables do not have a decimal type; from my own testing and what I've read out there, people use type Object for the variable to make SSIS "happy" with decimal values. It makes mine happy too. But inside that Foreach Loop I have a For Loop, and inside that an Execute SQL Task. In it, I need to create a parameter mapping to my variable so I can use the decimal field in my T-SQL call. There I do see a Decimal parameter type, so I use it and point it at my variable. When I run SSIS and it hits my SQL call, I get this in the output window:

        The type is not supported.DBTYPE_DECIMAL

    So I am hitting a wall here. All I wanna do is work with a decimal!!!
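    A hedged note, since this wall is a known one: the OLE DB connection used by the Execute SQL Task is commonly reported not to accept DECIMAL in its parameter mappings, and the usual workarounds are to map the parameter as NUMERIC instead of DECIMAL, to switch the task to an ADO.NET connection (where System.Decimal works), or to dodge the typed mapping entirely by casting on the T-SQL side:

        -- Sketch (table and column names assumed): accept the parameter
        -- loosely and cast it back to decimal inside the statement;
        -- ? is the OLE DB parameter placeholder.
        DECLARE @amount DECIMAL(9,2);
        SET @amount = CAST(? AS DECIMAL(9,2));
        UPDATE SomeTable SET Amount = @amount WHERE Id = ?;

    All three are workarounds for the same provider limitation; none of this is specific to the (9,2) precision.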

    Read the article

  • Why can't I save a long text in my MySQL database?

    - by DomingoSL
    I'm trying to save to my database a long text (about 2,500 characters) entered by my users through a web form and passed to the server using PHP. When I look in phpMyAdmin, the text gets cropped. How can I configure my table in order to store the complete text? This is my table definition:

        CREATE TABLE `extra_879` (
          `id` bigint(20) NOT NULL auto_increment,
          `id_user` bigint(20) NOT NULL,
          `title` varchar(300) NOT NULL,
          `content` varchar(3000) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `id_user` (`id_user`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;

    Note that the content field has a limit of 3,000 characters, but the text always gets cropped at 690 characters. Thanks for any help!

    EDIT: I found the problem but I don't know how to solve it. The query always gets cropped at the same character, a special character: ù

    EDIT 2: This is the cropped query:

        INSERT INTO extra_879 (id,id_user,title,content) VALUES (NULL,'1','Informazione Extra',' Riconoscimenti Laurea di ingegneria presa a le 22 anni e in il terso posto della promozione Diploma analista di sistemi ottenuto il rating massimo 20/20, primo posto della promozione. Borsa di Studio (offerta dal Ministero Esteri Italiano) vinta nel 2010 (Valutazione del territorio attraverso le nueve tecnologie) Pubblicazione di paper; Stima del RCS della nave CCGS radar sulla base dei risultati di H. Leong e H. Wilson. http://www.ing.uc.edu.vek-azozayalarchivospdf/PAPER-Sarmiento.pdf Tesi di laurea: PROGETTAZIONE E REALIZZAZIONE DI UN SIS-TEMA DI TELEMETRIA GSM PER IL CONTROLLO DELLO STATO DI TRANSITO VEICOLARE E CLIMA (ottenuto il punteggio pi')

    It gets cropped right at the phrase "(ottenuto il punteggio più alto)", exactly where the ù appears...

    EDIT 3: I am using jQuery + AJAX to send the query:

        $.ajax({type: "POST", url: "handler.php", data: "e_text="+ $('#e_text').val() + "&e_title="+ $('#extra_title').val(),
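    Two hedged observations that fit the symptom of truncation exactly at a non-ASCII character. First, the table is declared latin1 while the page is presumably sending UTF-8, and a charset mismatch on the connection makes MySQL cut the string at the first byte sequence it cannot convert:

        -- Convert the table so multi-byte characters fit:
        ALTER TABLE extra_879 CONVERT TO CHARACTER SET utf8;

        -- And declare the connection encoding right after connecting
        -- (in PHP: mysql_query("SET NAMES utf8");):
        SET NAMES utf8;

    Second, the AJAX call concatenates raw values into the data string, so they travel unencoded; passing an object instead, data: { e_text: $('#e_text').val(), e_title: $('#extra_title').val() }, lets jQuery URL-encode the special characters for you.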

    Read the article

  • What's the most efficient way to setup a multi-lingual website

    - by Jasper De Bruijn
    Hi, I'm developing a website that will be available in different languages. It is a LAMP (Linux, Apache, MySQL, PHP) setup, and it makes use of Smarty, mostly as the template engine. The way we currently translate is with a self-written Smarty plugin that recognizes certain tags in the HTML files and looks up the corresponding tag in a previously defined language file. The HTML looks like this:

        <p>Hi, welcome to $#gamedesc;!</p>

    And the language file looks like this:

        gamedesc:Poing 2009$;
        welcome:this is another tag$;

    Which would then output:

        <p>Hi, welcome to Poing 2009!</p>

    This system is very basic, but it is pretty hard to manage if, for example, I want to keep track of what has been translated so far, or give certain users the right to translate only certain tags. I've been looking at some alternative approaches: replacing the text file with XML files that could store some extra metadata, or storing all the texts in the database and retrieving them from there. My question is: what would be the best way to make this system both maintainable and performant under high user traffic? Are there perhaps any (lightweight) plugins I could take a look at?
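    One widely used option worth evaluating, sketched with assumed paths and locale names: PHP's native gettext extension. It brings the .po/.mo file format, which existing translator tooling (Poedit and friends, with fuzzy/untranslated tracking) already understands, covering the "what has been translated so far" requirement, and the smarty-gettext project exposes it as a {t}...{/t} block in templates:

        <?php
        // Sketch: standard gettext bootstrap; locale and paths are assumptions.
        $locale = 'it_IT.utf8';
        putenv("LC_ALL=$locale");
        setlocale(LC_ALL, $locale);
        // expects ./locale/it_IT/LC_MESSAGES/messages.mo
        bindtextdomain('messages', dirname(__FILE__) . '/locale');
        textdomain('messages');

        echo _('Hi, welcome to Poing 2009!');  // looked up in the .mo file
        ?>

    Compiled .mo catalogs are cached by the C library, so lookups stay cheap under traffic; the main operational cost is regenerating the .mo files when translations change.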

    Read the article

  • Existing parsers in c# (BSD license or similar)

    - by Sylverdrag
    I am looking for parsers (in C#) for a bunch of formats: PHP, ASP, some XML-based formats, HTML... pretty much anything I can get my hands on. The purpose is to separate the text from the code and do some edits without messing up the code. I had a look at ANTLR, but while it seems like the "right tool", there is just too much prior knowledge assumed; I have an easier time writing a parser from scratch than understanding how to "easily" generate parsers with ANTLR. (I wrote a small parser for a specific type of RTF file within a couple of days, so the task is probably within my reach, but as I have no formal knowledge of parsing/lexing, I am at a loss with ANTLR.) Then it occurred to me that parsers must already exist for many of these formats. So before I start writing yet another brand new and potentially buggy version of the wheel, I figured I would check which parsers already exist and can be reused in a commercial product. I could use parsers for just about every format in existence, so this question would be a good place to make a list of all existing free parsers written in C#, if there are any. Thanks in advance for your suggestions.
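    For the HTML case at least, a concrete candidate (shipped under the permissive MS-PL, if memory serves, so check it fits your commercial use): HtmlAgilityPack, which parses real-world malformed HTML into a DOM you can edit and re-save without disturbing the surrounding markup. A sketch of the text-only editing pass the question describes, where Translate is a hypothetical stand-in for your edit step:

        // Sketch: edit only the text nodes of a page, leave tags untouched.
        using HtmlAgilityPack;

        var doc = new HtmlDocument();
        doc.Load("page.html");
        foreach (HtmlNode node in doc.DocumentNode.SelectNodes("//text()"))
        {
            node.InnerHtml = Translate(node.InnerText);  // hypothetical edit
        }
        doc.Save("page.out.html");

    PHP and ASP pages are harder because they interleave two languages; one pragmatic approach is a first pass that lifts the <?php ... ?> / <% ... %> islands out into placeholders, runs the HTML pass above, then splices the islands back in.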

    Read the article

  • Problems running XNA game on 64-bit Windows 7

    - by Tesserex
    I'm having problems getting my game engine to run on my brother's machine, which is running 64-bit Windows 7. I'm developing on 32-bit XP SP2. My app uses XNA, FMOD.NET, and another dll I wrote entirely in C#. Everything is targeted to x86, not AnyCPU. I've read that this is required for XNA to work because there is no 64-bit xna framework. I recompiled FMOD.NET as x86 as well and made sure to be using the 32-bit version of the native dll. So I don't see any problems there. However when he tries to run my app, it gives an error which is mysterious, but not unheard of. A FileNotFoundException with an empty file name, and the top of the stack trace is in my main form constructor. The message is The specified module could not be found. (Exception from HRESULT: 0x8007007E) I found some threads online about this error, all with very vague, mixed, and fuzzy responses that don't really help me. Most remind people to target x86. Some say check that they have all the dlls necessary. I gave my brother Microsoft.Xna.Framework.dll, but does he need to install the entire XNA redistributable package? When I take everything I sent him and stick it in a random directory, it still runs fine for me. I developed the game in VS2008, not in game studio, using XNA 3.0 and a Windows Forms control that uses XNA drawing which I found in an msdn tutorial. I would also like to avoid requiring a full installer if possible. Any insight? Please?

    Read the article

  • jQuery Google Visualization API: the graph's data rows do not work

    - by marharépa
    Hi! I'd like to use the Google drawVisualization API. Example:

        var data = new google.visualization.DataTable();
        data.addColumn('string');
        data.addColumn('number');
        data.addRows([
          ['a', 14], ['b', 47], ['c', 80], ['d', 55], ['e', 16],
          ['f', 90], ['g', 29], ['h', 23], ['i', 58], ['j', 48]
        ]);

    My version gets its elements from another Google API, joins them, and then tries to place the variable between ([ and ]) so it looks like the example:

        var outputGraph = [];
        for (var i = 0, entry; entry = entries[i]; ++i) {
            var asd = [
                entry.getValueOf('ga:pageTitle'),
                entry.getValueOf('ga:pageviews')
            ].join("',");
            // get the 2 elements and join them to look like ['asd', 2],
            outputGraph.push(" ['" + asd + "]");
        }

        // this looks fine: outputGraph is like ['asd', 2], ['asd', 2], ['asd', 2]
        // as seen in the example
        var outputGraphFine = ("([" + outputGraph + "])");

        // I suspect this is what breaks the script.
        var data = new google.visualization.DataTable();
        data.addColumn('string', 'Task');
        data.addColumn('number', 'Hours per Day');
        data.addRows = outputGraphFine;

    But it doesn't work. Why?
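    A hedged diagnosis, based only on the snippet above: addRows expects a real array of row arrays, but this code builds strings that merely look like arrays, and then assigns to addRows instead of calling it (data.addRows = ... replaces the method rather than invoking it). A sketch of the fix:

        // Keep rows as actual arrays and call addRows() as a function.
        var rows = [];
        for (var i = 0, entry; entry = entries[i]; ++i) {
            rows.push([
                entry.getValueOf('ga:pageTitle'),
                Number(entry.getValueOf('ga:pageviews'))  // number column
            ]);
        }

        var data = new google.visualization.DataTable();
        data.addColumn('string', 'Task');
        data.addColumn('number', 'Hours per Day');
        data.addRows(rows);  // called, not assigned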

    Read the article

  • Opening port 80 with Java application on Ubuntu

    - by Featheast
    What I need to do is run a Java application that is a RESTful service server, written with Restlet. This service will be called by another app running on Google App Engine. Because of GAE's restrictions, every HTTP call made through the HttpUrlConnection class is limited to ports 80 and 443 (HTTP and HTTPS), so I have to deploy my server-side application on port 80 or 443. However, the app runs on Ubuntu, and ports under 1024 cannot be bound by a non-root user, so an access-denied exception is thrown when I run my app. The solutions that have come to mind include:

        1. Changing the JRE security policy (the lib/security/java.policy file) to grant
           java.net.SocketPermission "*:80", "listen, connect, accept, resolve".
           However, whether I pass the file on the command line or override the content
           of the JRE's java.policy, the same exception keeps coming up.
        2. Logging in as the root user; but because of my unfamiliarity with Unix,
           I don't know how to do this properly.
        3. Mapping all calls to port 80 to a higher port like 1234; then I could deploy
           my app on 1234 without a problem, and GAE would still send requests to
           port 80. But I don't know how to bridge that gap.

    Currently I am using a "hack": packaging the application into a jar file and running the jar with sudo, i.e. with root privileges. It works for now, but it is definitely not appropriate for the real deployment environment. If anyone has any idea about a solution, thanks very much!
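    A note on option 1, hedged but fairly safe: the bind restriction on ports below 1024 is enforced by the Linux kernel, not by the Java security manager, so no java.policy grant can lift it; that is why the exception persists. Option 3 is usually the easiest to finish, and the standard recipe is a one-time iptables redirect (port 8080 here is an assumption; any port >= 1024 works):

        # Run once as root: forward incoming TCP 80 to 8080, where the
        # Restlet app listens as an ordinary user.
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

    GAE keeps calling port 80; the kernel rewrites the destination before the packet reaches the application. The other common choice on Ubuntu is authbind, which whitelists a specific low port for a specific user.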

    Read the article

  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small-scale projects. After my initial paper brainstorm, I tend to start coding, and what I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, meaning I work on the underlying code libraries first; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that so should my "spec" or design document. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code; it seems, from my extensive research, that there is no 1:1 mapping between a design document and the code. When I need to research a topic, I dump information into OneNote, and from there I prioritise features into versions and then into related chunks, so that development runs in a fairly linear fashion. My tasks tend to look like this:

        1. Implement binary file reader
        2. Implement binary file writer
        3. Create object to encapsulate data for expression to the caller

    Now, any programmer worth his salt is aware that between those three to-do items could lie a potential wall of code expanding out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively; by the time one mangles pseudocode it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class, or concept level? What works for you?

    Read the article

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL: one sitting on a rack right next to me under a 10 Mbit/s upload pipe (the main server), and another a couple of miles away on a 3 Mbit/s upload pipe (the mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them: under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens to a meaningful block of data once a month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate). The other, most important (and annoying) recurring issue is that when, for some reason, we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work, and I have to manually dump on one end and upload on the other; quite a task nowadays, moving some 0.5 TB worth of data. Is there software for this? I know MySQL (the "corporation") offers full database replication as a VERY expensive service. I am just wondering what people out there do. The way it's structured, we run automatic failover: if one server is not up, the main URL simply resolves to the other server.
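    A hedged note on the re-sync pain: LOAD DATA FROM MASTER was always limited (it only handles MyISAM tables and locks the master while it runs), so the usual recipe for re-seeding a slave without the full manual shuffle is a binlog-coordinate snapshot. Sketched with placeholder credentials:

        -- On the master (shell):
        --   mysqldump --all-databases --master-data=2 --single-transaction > dump.sql
        -- --master-data records the binlog file/position in the dump header;
        -- --single-transaction keeps InnoDB reads consistent without locking.

        -- On the slave, after loading dump.sql:
        CHANGE MASTER TO
            MASTER_HOST = 'master.example.com',
            MASTER_USER = 'repl',
            MASTER_PASSWORD = '...',
            MASTER_LOG_FILE = 'mysql-bin.000123',  -- from the dump header
            MASTER_LOG_POS  = 4567;                -- from the dump header
        START SLAVE;

    The silently dropped statements are worth chasing rather than tolerating: statement-based replication of non-deterministic queries is a classic cause, and the slave's error and skip counters in SHOW SLAVE STATUS usually say more than "lost packet".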

    Read the article

  • Why does this extension method throw a NullReferenceException in VB.NET?

    - by Dan
    From previous experience I had been under the impression that it's perfectly legal (though perhaps not advisable) to call extension methods on a null instance. So in C#, this code compiles and runs:

        // code in static class
        static bool IsNull(this object obj)
        {
            return obj == null;
        }

        // code elsewhere
        object x = null;
        bool exists = !x.IsNull();

    However, I was just putting together a little suite of example code for the other members of my development team (we just upgraded to .NET 3.5 and I've been assigned the task of getting the team up to speed on some of the new features available to us), and I wrote what I thought was the VB.NET equivalent of the above code, only to discover that it actually throws a NullReferenceException. The code I wrote was this:

        ' code in module
        <Extension()> _
        Function IsNull(ByVal obj As Object) As Boolean
            Return obj Is Nothing
        End Function

        ' code elsewhere
        Dim exampleObject As Object = Nothing
        Dim exists As Boolean = Not exampleObject.IsNull()

    The debugger stops right there, as if I'd called an instance method. Am I doing something wrong (e.g., is there some subtle difference in the way I defined the extension method between C# and VB.NET)? Is it actually not legal to call an extension method on a null instance in VB.NET, though it's legal in C#? (I would have thought this was a .NET thing as opposed to a language-specific thing, but perhaps I was wrong.) Can anybody explain this one to me?
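    A hedged explanation worth verifying: this is not about the CLR but about VB's late binding. When the receiver is statically typed as Object (with Option Strict off), VB compiles the call as a late-bound member lookup, which only considers real instance members at runtime, so the extension method is never found and dereferencing the null receiver throws. Giving the variable any compile-time type other than Object restores early binding straight to the extension method:

        ' Sketch: the receiver is early-bound, so the extension method is
        ' resolved at compile time and Nothing is passed in as the parameter.
        Dim exampleObject As String = Nothing
        Dim exists As Boolean = Not exampleObject.IsNull()  ' no exception

    Turning Option Strict On surfaces the difference at compile time, since late-bound calls are then disallowed outright.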

    Read the article

  • PHP: when is the right time to use the GET method?

    - by user329394
    Hi all, when is the right time to use $_GET['data']? I want to pass the value of a user id from page A to page B using a JavaScript popup:

        $qry = "SELECT * FROM dbase WHERE id='" . $id . "'";
        $sql = mysql_query($qry);
        $rs  = mysql_fetch_array($sql);

        <script language="JavaScript">
        function myPopup() {
            window.open(
                "<?=$CFG->wwwroot.'/ptk/main.php?task=ptk_checkapp&id='.$rs['userid'];?>",
                "myWindow",
                "status = 1, height = 500, width = 500, scrollbars=yes, toolbar=no, directories=no, location=no, menubar=no, resizable='yes';"
            );
        }
        </script>

    Calling it from a hyperlink:

        <a href="#" onclick="myPopup()"><?=ucwords(strtolower($rs->nama));?></a>

    It seems that $rs['userid'] doesn't hold any value. Can you tell me the problem, or perhaps a solution? Thank you very much.
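    To the title question, with a hedged sketch of the receiving side: $_GET is for exactly this case, values that arrive in the query string, which is what the popup URL above builds. On main.php (page B), the id travels in and is read back like this:

        <?php
        // main.php (page B): read what the popup URL carried.
        $task = isset($_GET['task']) ? $_GET['task'] : '';
        $id   = isset($_GET['id']) ? (int) $_GET['id'] : 0;  // cast as a crude injection guard

        if ($task === 'ptk_checkapp' && $id > 0) {
            $rs = mysql_fetch_array(mysql_query(
                "SELECT * FROM dbase WHERE id='" . $id . "'"));
            // ...render the popup content...
        }
        ?>

    On the empty value: mysql_fetch_array returns an array, so the link text should use $rs['nama'] rather than $rs->nama, and $rs['userid'] is only set if the SELECT actually matched a row for $id; checking mysql_num_rows first will tell you which of the two is failing.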

    Read the article
