Search Results

Search found 47996 results on 1920 pages for 'google apps script'.

Page 482/1920 | < Previous Page | 478 479 480 481 482 483 484 485 486 487 488 489  | Next Page >

  • Persistence JDO - How to query a property of a collection with JDOQL?

    - by Sergio del Amo
    I want to build an application where a user identified by an email address can have several application accounts. Each account can have one o more users. I am trying to use the JDO Storage capabilities with Google App Engine Java. Here is my attempt: @PersistenceCapable @Inheritance(strategy = InheritanceStrategy.NEW_TABLE) public class AppAccount { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private Long id; @Persistent private String companyName; @Persistent List<Invoices> invoices = new ArrayList<Invoices>(); @Persistent List<AppUser> users = new ArrayList<AppUser>(); // Getter Setters and Other Fields } @PersistenceCapable @EmbeddedOnly public class AppUser { @Persistent private String username; @Persistent private String firstName; @Persistent private String lastName; // Getter Setters and Other Fields } When a user logs in, I want to check how many accounts does he belongs to. If he belongs to more than one he will be presented with a dashboard where he can click which account he wants to load. This is my code to retrieve a list of app accounts where he is registered. public static List<AppAccount> getUserAppAccounts(String username) { PersistenceManager pm = JdoUtil.getPm(); Query q = pm.newQuery(AppAccount.class); q.setFilter("users.username == usernameParam"); q.declareParameters("String usernameParam"); return (List<AppAccount>) q.execute(username); } But I get the next error: SELECT FROM invoices.server.AppAccount WHERE users.username == usernameParam PARAMETERS String usernameParam: Encountered a variable expression that isn't part of a join. Maybe you're referencing a non-existent field of an embedded class. org.datanucleus.store.appengine.FatalNucleusUserException: SELECT FROM com.softamo.pelicamo.invoices.server.AppAccount WHERE users.username == usernameParam PARAMETERS String usernameParam: Encountered a variable expression that isn't part of a join. Maybe you're referencing a non-existent field of an embedded class. 
at org.datanucleus.store.appengine.query.DatastoreQuery.getJoinClassMetaData(DatastoreQuery.java:1154) at org.datanucleus.store.appengine.query.DatastoreQuery.addLeftPrimaryExpression(DatastoreQuery.java:1066) at org.datanucleus.store.appengine.query.DatastoreQuery.addExpression(DatastoreQuery.java:846) at org.datanucleus.store.appengine.query.DatastoreQuery.addFilters(DatastoreQuery.java:807) at org.datanucleus.store.appengine.query.DatastoreQuery.performExecute(DatastoreQuery.java:226) at org.datanucleus.store.appengine.query.JDOQLQuery.performExecute(JDOQLQuery.java:85) at org.datanucleus.store.query.Query.executeQuery(Query.java:1489) at org.datanucleus.store.query.Query.executeWithArray(Query.java:1371) at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243) at com.softamo.pelicamo.invoices.server.Store.getUserAppAccounts(Store.java:82) at com.softamo.pelicamo.invoices.test.server.StoreTest.testgetUserAppAccounts(StoreTest.java:39) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Any idea? I am getting JDO persistance totally wrong?
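
    A commonly suggested rework (untested here, and it assumes AppUser is stored as an owned child entity rather than an @EmbeddedOnly member, which the App Engine DataNucleus plugin needs for this kind of join) is to express the filter as a contains() over the collection with a declared variable instead of dotting straight into the list:

        public static List<AppAccount> getUserAppAccounts(String username) {
            PersistenceManager pm = JdoUtil.getPm();
            Query q = pm.newQuery(AppAccount.class);
            // join through the collection via a declared variable
            q.setFilter("users.contains(user) && user.username == usernameParam");
            q.declareVariables(AppUser.class.getName() + " user");
            q.declareParameters("String usernameParam");
            return (List<AppAccount>) q.execute(username);
        }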

    Read the article

  • File Fix-it codegolf (GCJ 2010 1B-A)

    - by KirarinSnow
    Last year (2009), the Google Code Jam featured an interesting problem as the first problem in Round 1B: Decision Tree As the problem seemed tailored for Lisp-like languages, we spontaneously had an exciting codegolf here on SO, in which a few languages managed to solve the problem in fewer characters than any Lisp variety, using quite a number of different techniques. This year's Round 1B Problem A (File Fix-it) also seems tailored for a particular family of languages, Unix shell scripts. So continuing the "1B-A tradition" would be appropriate. :p But which language will end up with the shortest code? Let us codegolf and see! Problem description (adapted from official page): You are given T test cases. Each test case contains N lines that list the full path of all directories currently existing on your computer. For example: /home/awesome /home/awesome/wheeeeeee /home/awesome/wheeeeeee/codegolfrocks /home/thecakeisalie Next, you are given M lines that list the full path of directories you would like to create. They are in the same format as the previous examples. You can create a directory using the mkdir command, but you can only do so if the parent directory already exists. For example, to create the directories /pyonpyon/fumufumu/yeahyeah and /pyonpyon/fumufumu/yeahyeahyeah, you would need to use mkdir four times: mkdir /pyonpyon mkdir /pyonpyon/fumufumu mkdir /pyonpyon/fumufumu/yeahyeah mkdir /pyonpyon/fumufumu/yeahyeahyeah For each test case, return the number of times you have to call mkdir to create all the directories you would like to create. Input Input consists of a text file whose first line contains the integer T, the number of test cases. The rest of the file contains the test cases. Each test case begins with a line containing the integers N and M, separated by a space. The next N lines contain the path of each directory currently existing on your computer (not including the root directory /). This is a concatenation of one or more non-empty lowercase alphanumeric strings, each preceded by a single /. The following M lines contain the path of each directory you would like to create. Output For each case, print one line containing Case #X: Y, where X is the case number and Y is the solution. Limits 1 = T = 100. 0 = N = 100. 1 = M = 100. Each path contains at most 100 characters. Every path appears only once in the list of directories already on your computer, or in the list of desired directories. However, a path may appear on both lists, as in example case #3 below. If a directory is in the list of directories already on your computer, its parent directory will also be listed, with the exception of the root directory /. The input file is at most 100,000 bytes long. Example Larger sample test cases may be downloaded here. Input: 3 0 2 /home/sparkle/pyon /home/sparkle/cakes 1 3 /z /z/y /z/x /y/y 2 1 /moo /moo/wheeeee /moo Output: Case #1: 4 Case #2: 4 Case #3: 0 Code Golf Please post your shortest code in any language that solves this problem. Input and output may be handled via stdin and stdout or by other files of your choice. Please include a disclaimer if your code has the potential to modify or delete existing files when executed. Winner will be the shortest solution (by byte count) in a language with an implementation existing prior to the start of Round 1B 2010.
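
    Not a golfed entry, just a readable reference solution in Python (stdin to stdout, assuming the input format above): keep a set of every directory that already exists or has been created, and count one mkdir for each missing prefix of each requested path.

        import sys

        def solve(existing, wanted):
            have = set()
            for path in existing:                      # pre-existing dirs and their ancestors
                parts = path.strip('/').split('/')
                for i in range(1, len(parts) + 1):
                    have.add('/' + '/'.join(parts[:i]))
            calls = 0
            for path in wanted:                        # create each missing prefix, shallowest first
                parts = path.strip('/').split('/')
                for i in range(1, len(parts) + 1):
                    prefix = '/' + '/'.join(parts[:i])
                    if prefix not in have:
                        have.add(prefix)
                        calls += 1
            return calls

        data = sys.stdin.read().split()
        t, idx = int(data[0]), 1
        for case in range(1, t + 1):
            n, m = int(data[idx]), int(data[idx + 1]); idx += 2
            existing, wanted = data[idx:idx + n], data[idx + n:idx + n + m]; idx += n + m
            print("Case #%d: %d" % (case, solve(existing, wanted)))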

    Read the article

  • Android: Providing autosuggestion with the Android Places API?

    - by user1787493
    I am very new to android Google maps i write the following program for displaying the auto sugesstion in the android when i am type the text in the Autocomplete text box it is going the input to the url but the out put is not showing in the program .please see once and let me know where i am doing the mistake. package com.example.exampleplaces; import java.util.ArrayList; import org.json.JSONArray; import org.json.JSONObject; import org.json.JSONTokener; import android.app.Activity; import android.os.AsyncTask; import android.os.Bundle; import android.provider.SyncStateContract.Constants; import android.text.Editable; import android.text.TextWatcher; import android.util.Log; import android.view.View; import android.widget.AutoCompleteTextView; import android.widget.ProgressBar; public class Place extends Activity { private AutoCompleteTextView mAtv_DestinationLocaiton; public ArrayList<String> autocompletePlaceList; public boolean DestiClick2; private ProgressBar destinationProgBar; private static final String GOOGLE_PLACE_API_KEY = ""; private static final String GOOGLE_PLACE_AUTOCOMPLETE_URL = "https://maps.googleapis.com/maps/api/place/autocomplete/json?"; //https://maps.googleapis.com/maps/api/place/autocomplete/output?parameters @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); autocompletePlaceList = new ArrayList<String>(); destinationProgBar=(ProgressBar)findViewById(R.id.progressBar1); mAtv_DestinationLocaiton = (AutoCompleteTextView) findViewById(R.id.et_govia_destination_location); mAtv_DestinationLocaiton.addTextChangedListener(new TextWatcher() { public void onTextChanged(CharSequence s, int start, int before, int count) { Log.i("Count", "" + count); if (!mAtv_DestinationLocaiton.isPerformingCompletion()) { autocompletePlaceList.clear(); DestiClick2 = false; new loadDestinationDropList().execute(s.toString()); } } public void beforeTextChanged(CharSequence s, int start, int count, int after) { // TODO Auto-generated method stub } public void afterTextChanged(Editable s) { // TODO Auto-generated method stub } }); } private class loadDestinationDropList extends AsyncTask<String, Void, ArrayList<String>> { @Override protected void onPreExecute() { // Showing progress dialog before sending http request destinationProgBar.setVisibility(View.INVISIBLE); } protected ArrayList<String> doInBackground(String... unused) { try { Thread.sleep(3000); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } autocompletePlaceList = getAutocompletePlaces(mAtv_DestinationLocaiton.getText().toString()); return autocompletePlaceList; } public ArrayList<String> getAutocompletePlaces(String placeName) { String response2 = ""; ArrayList<String> autocompletPlaceList = new ArrayList<String>(); String url = GOOGLE_PLACE_AUTOCOMPLETE_URL + "input=" + placeName + "&sensor=false&key=" + GOOGLE_PLACE_API_KEY; Log.e("MyAutocompleteURL", "" + url); try { //response2 = httpCall.connectToGoogleServer(url); JSONObject jsonObj = (JSONObject) new JSONTokener(response2.trim() .toString()).nextValue(); JSONArray results = (JSONArray) jsonObj.getJSONArray("predictions"); for (int i = 0; i < results.length(); i++) { Log.e("RESULTS", "" + results.getJSONObject(i).getString("description")); autocompletPlaceList.add(results.getJSONObject(i).getString( "description")); } } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); } return autocompletPlaceList; } } }
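
    Two things the posted code appears to be missing (a hedged sketch; method and field names follow the question, and the adapter layout is a stock Android resource): response2 is never filled because the connectToGoogleServer() call is commented out, and the parsed results are never attached to the AutoCompleteTextView when the AsyncTask finishes. Something along these lines, dropped into the activity and into loadDestinationDropList respectively, would cover both (needs java.net.HttpURLConnection, java.net.URL, java.io.BufferedReader, java.io.InputStreamReader and android.widget.ArrayAdapter imports):

        // inside the activity: a plain HTTP fetch to replace the commented-out server call
        private String downloadJson(String urlString) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
            try {
                BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    sb.append(line);
                }
                return sb.toString();
            } finally {
                conn.disconnect();
            }
        }

        // inside loadDestinationDropList: push the parsed descriptions into the dropdown
        @Override
        protected void onPostExecute(ArrayList<String> places) {
            destinationProgBar.setVisibility(View.GONE);
            ArrayAdapter<String> adapter = new ArrayAdapter<String>(
                    Place.this, android.R.layout.simple_dropdown_item_1line, places);
            mAtv_DestinationLocaiton.setAdapter(adapter);
        }

    The query parameter should also be URL-encoded (URLEncoder.encode(placeName, "UTF-8")) and a real key supplied, since GOOGLE_PLACE_API_KEY is empty in the posted code.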

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback of running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is outputted instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment, should also be happen during an automated deployment. 
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)
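
    The drift check described above can be wired into the same loop. A sketch, not taken from the article: it assumes it sits inside the ForEach-Object block so $_ is the current row, $baselineSnapshot is a placeholder path to the snapshot the target was last deployed from, and exit code 79 meaning "differences found" is taken from the text.

        & $SQLComparePath @("/snapshot1:$($baselineSnapshot)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")
        if ($LASTEXITCODE -eq 79) {
            # target has drifted from the expected schema - report instead of deploying
            Write-Host "Drift detected on $($_.serverName)/$($_.databaseName); skipping deployment"
        }
        elseif ($LASTEXITCODE -eq 0) {
            # target matches the baseline, so it is safe to apply the new state
            & $SQLComparePath @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/sync")
        }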

    Read the article

  • Is there a way to tell Drupal not to cache a specific page?

    - by TechplexEngineer
    I have a custom php page that processes a feed of images and makes albums out of it. However whenever i add pictures to my feed, the Drupal page doesn't change until I clear the caches. Is there a way to tell Drupal not to cache that specific page? Thanks, Blake Edit: Drupal v6.15 Not exactly sure what you mean oswald, team2648.com/media is hte page. I used the php interpreter module. Here is the php code: <?php //////// CODE by Pikori Web Designs - pikori.org /////////// //////// Please do not remove this title, /////////// //////// feel free to modify or copy this software /////////// $feedURL = 'http://picasaweb.google.com/data/feed/base/user/Techplex.Engineer?alt=rss&kind=album&hl=en_US'; $photoNodeNum = 4; $galleryTitle = 'Breakaway Pictures'; $year = '2011'; ?> <?php /////////////// DO NOT EDIT ANYTHING BELOW THIS LINE ////////////////// $album = $_GET['album']; if($album != ""){ //GENERATE PICTURES $feedURL= "http://".$album."&kind=photo&hl=en_US"; $feedURL = str_replace("entry","feed",$feedURL); $sxml = simplexml_load_file($feedURL); $column = 0; $pix_count = count($sxml->channel->item); //print '<h2>'.$sxml->channel->title.'</h2>'; print '<table cellspacing="0" cellpadding="0" style="font-size:10pt" width="100%"><tr>'; for($i = 0; $i < $pix_count; $i++) { print '<td align="center">'; $entry = $sxml->channel->item[$i]; $picture_url = $entry->enclosure['url']; $time = $entry->pubDate; $time_ln = strlen($time)-14; $time = substr($time,0,$time_ln); $description = $entry->description; $tn_beg = strpos($description, "src="); $tn_end = strpos($description, "alt="); $tn_length = $tn_end - $tn_beg; $tn = substr($description, $tn_beg, $tn_length); $tn_small = str_replace("s288","s128",$tn); $picture_url = $tn; $picture_beg = strpos($picture_url,"http:"); $picture_len = strlen($picture_url)-7; $picture_url = substr($tn, $picture_beg, $picture_len); $picture_url = str_replace("s288","s640",$picture_url); print '<a rel="lightbox[group]" href="'.$picture_url.'">'; print '<img '.$tn_small.' style="border:1px solid #02293a"><br>'; print '</a></td> '; if($column == 4){ print '</tr><tr>'; $column = 0;} else $column++; } print '</table>'; print '<br><center><a href="media">Return to album</a></center>'; } else { //GENERATE ALBUMS $sxml = simplexml_load_file($feedURL); $column = 0; $album_count = count($sxml->channel->item); //print '<h2>'.$galleryTitle.'</h2>'; print '<table cellspacing="0" cellpadding="0" style="font-size:10pt" width="100%"><tr>'; for($i = 0; $i < $album_count; $i++) { $entry = $sxml->channel->item[$i]; $time = $entry->pubDate; $time_ln = strlen($time)-14; $time = substr($time,0,$time_ln); $description = $entry->description; $tn_beg = strpos($description, "src="); $tn_end = strpos($description, "alt="); $tn_length = $tn_end - $tn_beg; $tn = substr($description, $tn_beg, $tn_length); $albumrss = $entry->guid; $albumrsscount = strlen($albumrss) - 7; $albumrss = substr($albumrss, 7, $albumrsscount); $search = strstr($time, $year); if($search != FALSE || $year == ''){ print '<td valign="top">'; print '<a href="/node/'.$photoNodeNum.'?album='.$albumrss.'">'; print '<center><img '.$tn.' style="border:3px double #cccccc"><br>'; print $entry->title.'<br>'.$time.'</center>'; print '</a><br></td> '; if($column == 3){ print '</tr><tr>'; $column = 0; } else { $column++; } } } print '</table>'; } ?>
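
    One approach often suggested for Drupal 6 (untested here): disable the page cache for just this request by overriding the cache setting at the top of the PHP node, before Drupal writes the page out at the end of the request. The rest of the site stays cached.

        <?php
        // put this at the very top of the PHP node that builds the albums
        $GLOBALS['conf']['cache'] = FALSE;   // skip the page cache for this request only
        ?>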

    Read the article

  • Java unit test coverage numbers do not match.

    - by Dan
    Below is a class I have written in a web application I am building using Java Google App Engine. I have written Unit Tests using TestNG and all the tests pass. I then run EclEmma in Eclipse to see the test coverage on my code. All the functions show 100% coverage but the file as a whole is showing about 27% coverage. Where is the 73% uncovered code coming from? Can anyone help me understand how EclEmma works and why I am getting the discrepancy in numbers? package com.skaxo.sports.models; import javax.jdo.annotations.IdGeneratorStrategy; import javax.jdo.annotations.IdentityType; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; import javax.jdo.annotations.PrimaryKey; @PersistenceCapable(identityType= IdentityType.APPLICATION) public class Account { @PrimaryKey @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY) private Long id; @Persistent private String userId; @Persistent private String firstName; @Persistent private String lastName; @Persistent private String email; @Persistent private boolean termsOfService; @Persistent private boolean systemEmails; public Account() {} public Account(String firstName, String lastName, String email) { super(); this.firstName = firstName; this.lastName = lastName; this.email = email; } public Account(String userId) { super(); this.userId = userId; } public void setId(Long id) { this.id = id; } public Long getId() { return id; } public String getUserId() { return userId; } public void setUserId(String userId) { this.userId = userId; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public boolean acceptedTermsOfService() { return termsOfService; } public void setTermsOfService(boolean termsOfService) { this.termsOfService = termsOfService; } public boolean acceptedSystemEmails() { return systemEmails; } public void setSystemEmails(boolean systemEmails) { this.systemEmails = systemEmails; } } Below is the test code for the above class. 
package com.skaxo.sports.models; import static org.testng.Assert.assertEquals; import static org.testng.Assert.assertNotNull; import static org.testng.Assert.assertTrue; import static org.testng.Assert.assertFalse; import org.testng.annotations.BeforeTest; import org.testng.annotations.Test; public class AccountTest { @Test public void testId() { Account a = new Account(); a.setId(1L); assertEquals((Long) 1L, a.getId(), "ID"); a.setId(3L); assertNotNull(a.getId(), "The ID is set to null."); } @Test public void testUserId() { Account a = new Account(); a.setUserId("123456ABC"); assertEquals(a.getUserId(), "123456ABC", "User ID incorrect."); a = new Account("123456ABC"); assertEquals(a.getUserId(), "123456ABC", "User ID incorrect."); } @Test public void testFirstName() { Account a = new Account("Test", "User", "[email protected]"); assertEquals(a.getFirstName(), "Test", "User first name not equal to 'Test'."); a.setFirstName("John"); assertEquals(a.getFirstName(), "John", "User first name not equal to 'John'."); } @Test public void testLastName() { Account a = new Account("Test", "User", "[email protected]"); assertEquals(a.getLastName(), "User", "User last name not equal to 'User'."); a.setLastName("Doe"); assertEquals(a.getLastName(), "Doe", "User last name not equal to 'Doe'."); } @Test public void testEmail() { Account a = new Account("Test", "User", "[email protected]"); assertEquals(a.getEmail(), "[email protected]", "User email not equal to '[email protected]'."); a.setEmail("[email protected]"); assertEquals(a.getEmail(), "[email protected]", "User email not equal to '[email protected]'."); } @Test public void testAcceptedTermsOfService() { Account a = new Account(); a.setTermsOfService(true); assertTrue(a.acceptedTermsOfService(), "Accepted Terms of Service not true."); a.setTermsOfService(false); assertFalse(a.acceptedTermsOfService(), "Accepted Terms of Service not false."); } @Test public void testAcceptedSystemEmails() { Account a = new Account(); a.setSystemEmails(true); assertTrue(a.acceptedSystemEmails(), "System Emails is not true."); a.setSystemEmails(false); assertFalse(a.acceptedSystemEmails(), "System Emails is not false."); } }
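
    A likely explanation (worth verifying against your own build): @PersistenceCapable classes are bytecode-enhanced by DataNucleus for App Engine, which weaves a large number of synthetic jdo* methods into Account.class, and EclEmma counts those uncovered instructions at the file/class level even though every method you wrote shows 100%. Inspecting the enhanced class makes this visible; the path below follows the usual GAE war layout and may differ in your project.

        javap -p war/WEB-INF/classes/com/skaxo/sports/models/Account.class | grep -i jdo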

    Read the article

  • How do I host node.js apps with pm2 without running them as root?

    - by jishi
    I have setup pm2 to run a node.js application, and I can successfully start it and it will resurrect upon reboot. However, the pm2 daemon is ran as root, which makes me think that all my node-scripts also runs as root? Even though I added them as a regular user in the system. The log files and stuff is created in the users home dir, /~/.pm2/logs, but the logs are owned by root. when I invoke pm2 startup (which handles the installation of the init.d script etc), it creates /etc/init.d/pm2-init.sh which looks like this: #!/bin/bash # chkconfig: 2345 98 02 # # description: PM2 next gen process manager for Node.js # processname: pm2 # ### BEGIN INIT INFO # Provides: pm2 # Required-Start: # Required-Stop: # Should-Start: # Should-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: PM2 init script # Description: PM2 is the next gen process manager for Node.js ### END INIT INFO NAME=pm2 PM2=/usr/local/lib/node_modules/pm2/bin/pm2 NODE=/usr/local/bin/node export HOME="/root" start() { echo "Starting $NAME" $NODE $PM2 stopAll $NODE $PM2 resurrect } stop() { $NODE $PM2 dump $NODE $PM2 stopAll } restart() { echo "Restarting $NAME" stop start } status() { echo "Status for $NAME:" $NODE $PM2 list RETVAL=$? } case "$1" in start) start ;; stop) stop ;; status) status ;; restart) restart ;; *) echo "Usage: {start|stop|status|restart}" exit 1 ;; esac exit $RETVAL When I dump the processes (which is what it will use when resurrecting the processes), I see mentions of user "USER":"pi" but I don't think that it's actually run as user pi. Any thoughts?
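
    In later pm2 releases the generated init script can be told which account to resurrect as (flags as documented for those versions; verify against pm2 --help on yours): generate the startup script for the unprivileged user and save the process list as that user, rather than letting everything default to root's HOME.

        # run once as root: write an init script that starts pm2 as user 'pi'
        sudo pm2 startup ubuntu -u pi --hp /home/pi
        # then, as user 'pi': record the currently running apps for resurrection at boot
        pm2 save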

    Read the article

  • Apps management dashboard: what features should be in it?

    - by Christophe
    On a dashboard to manage business web apps (CRM, email marketing, collaboration, accounting...) from a single place, which features are must-haves and which are nice-to-haves? Those that come to mind are SSO, unified billing, and user provisioning. What else? What should be available to the super user (admin) vs the business user? Do you know any products of this kind on the market today? Thanks Christophe GetApp.com

    Read the article

  • How to download Vim script on the command-line?

    - by HaiYuan Zhang
    Whenever I want to install a new Vim script on the Linux server I'm working on, my typical workflow is the following: browse to the plugin's homepage on Vim online using FireXXXX, download the right version of the plugin to my laptop by clicking some highlighted link, then upload the downloaded plugin from my laptop to the Linux server using WinSCP, which is really inconvenient. I don't know what the magic behind this is: the same hyperlink lets me download the file when I click it in a web browser, but feeding it to Wget on the Linux command line ends up with nothing but an error. Otherwise I could just grab the link in the web browser and then use Wget or some similar tool to actually do the downloading. I try new cool Vim scripts quite often, so you can imagine my dismay at having to repeat this tedious routine every time. What are some tips that would let me download Vim scripts in a more "professional" way? Post edit: My problem is not finding a tool like Wget or cURL. The problem I met is quite specific: using these tools to download a Vim script. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example. It's the normal place where one can get the script, at least for me. But I can't find a working URL on this page that I can feed to Wget.
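
    A sketch of the usual trick (src_id values are per-release and must be read off the script page, so SRC_ID below is a placeholder): vim.org serves the actual file from download_script.php, not from the script.php page you browse to, so either feed that URL to wget directly or scrape it first.

        # direct download once you know the release's src_id (output name is up to you)
        wget "http://www.vim.org/scripts/download_script.php?src_id=SRC_ID" -O plugin.vim
        # or scrape the newest src_id off the script page and fetch it in one go
        curl -s "http://www.vim.org/scripts/script.php?script_id=30" \
          | grep -o 'download_script.php?src_id=[0-9]*' | head -n1 \
          | xargs -I{} wget "http://www.vim.org/scripts/{}" -O plugin.vim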

    Read the article

  • logrotate isn't rotating a particular log file (and I think it should be)

    - by Max Williams
    Hi all. For a particular app, i have log files in two places. One of the places has just one log file that i want to use with logrotate, for the other location i want to use logrotate on all log files in that folder. I've set up an entry called millionaire-staging in /etc/logrotate.d and have been testing it by calling logrotate -f millionaire-staging. Here's my entry: #/etc/logrotate.d/millionaire-staging compress rotate 1000 dateext missingok sharedscripts copytruncate /var/www/apps/test.millionaire/log/staging.log { weekly } /var/www/apps/test.millionaire/shared/log/*log { size 40M } So, for the first folder, i want to rotate weekly (this seems to have worked fine). For the other, i want to rotate only when the log files get bigger than 40 meg. When i look in that folder (using the same locator as in the logrotate config), i can see a file in there that's 54M and which hasn't been rotated: $ ls -lh /var/www/apps/test.millionaire/shared/log/*log -rw-r--r-- 1 www-data root 33M 2010-12-29 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.access-log -rw-r--r-- 1 www-data root 54M 2010-09-10 16:57 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.debug-log -rw-r--r-- 1 www-data root 53K 2010-12-14 15:48 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.error-log -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.access-log -rw-r--r-- 1 www-data root 16K 2010-12-17 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.error-log -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stderr.log -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stdout.log Some of the other log files in that folder have been rotated though: $ ls -lh /var/www/apps/test.millionaire/shared/log total 91M -rw-r--r-- 1 www-data root 33M 2010-12-29 15:05 test.millionaire.charanga.com.access-log -rw-r--r-- 1 www-data root 54M 2010-09-10 16:57 test.millionaire.charanga.com.debug-log -rw-r--r-- 1 www-data root 53K 2010-12-14 15:48 test.millionaire.charanga.com.error-log -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 test.millionaire.charanga.com.ssl.access-log -rw-r--r-- 1 www-data root 16K 2010-12-17 15:00 test.millionaire.charanga.com.ssl.error-log -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 unicorn.stderr.log -rw-r--r-- 1 deploy deploy 41K 2010-12-29 11:03 unicorn.stderr.log-20101229.gz -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 unicorn.stdout.log -rw-r--r-- 1 deploy deploy 1.1K 2010-10-15 11:05 unicorn.stdout.log-20101229.gz I think what might have happened is that i first ran this config with a pattern matching *.log, and that means it only rotated the two files that ended in .log (as opposed to -log). Then, when i changed the config and ran it again, it won't do any more since it think's its already had its weekly run, or something. Can anyone see what i'm doing wrong? Is it to do with those top folders being owned by root rather than deploy do you think? thanks, max
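
    A diagnostic sketch (paths are the Debian/Ubuntu defaults and may differ): dry-run the single config with debug output to see exactly which files logrotate considers and why it skips them, and check the per-file timestamps it recorded from the earlier runs.

        logrotate -d -f /etc/logrotate.d/millionaire-staging   # debug/dry-run, prints the decision per file
        grep millionaire /var/lib/logrotate/status              # last-rotation dates logrotate has recorded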

    Read the article

  • How can I save the layout of my open apps on OS X?

    - by fad
    I'd like to save the position and size of my open apps and restore them later (after a restart). I found an AppleScript that does this with Finder windows only: http://hubionmac.com/wordpress/2008/09/finder-fenster-position-und-layout-speichern/ (German). Maybe I could adapt it, but I can't believe there's no application for this.

    Read the article

  • Ubuntu: Copy terminal mouse selection automatically to clipboard for other apps?

    - by mfn
    I can use keyboard shortcuts, and selecting text with the left mouse button allows me to insert the selection within gnome-terminal with the middle mouse button. However, a selection made with the left button does not copy to the clipboard and thus is not available to other apps. I don't want to have to use keyboard shortcuts when I select text with the left mouse button; I want it to land in the clipboard automatically (much like PuTTY works on Windows).
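
    One approach often suggested (assumes the autocutsel package; untested here): keep the X PRIMARY selection, which is what a left-button drag sets, mirrored into the CLIPBOARD selection that ordinary Ctrl+V applications read, by running autocutsel for both selections at session startup.

        sudo apt-get install autocutsel
        # add both lines to Startup Applications (or ~/.xprofile)
        autocutsel -fork                        # keep CLIPBOARD and the cut buffer in sync
        autocutsel -selection PRIMARY -fork     # keep PRIMARY (mouse selection) in sync too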

    Read the article

  • Need help trying to allow my remote PowerShell script to run on my Windows 2008 r2 Server

    - by Pure.Krome
    I've got a Windows 2008 r2 server with Sql Server 2008 r2 installed. I've got a Sql Server Agent which tries to run a Powershell job, but fails :- Message Executed as user: FooServer\SqlServerUser. A job step received an error at line 1 in a PowerShell script. The corresponding line is '& '\\polanski\Backups\Database\7ZipFooDatabases.ps1'' Correct the script and reschedule the job. The error information returned by PowerShell is: 'File \\polanski\Backups\Database\7ZipFooDatabases.ps1 cannot be loaded. The file \\polanski\Backups\Database\7ZipFooDatabases.ps1 is not digitally signed. The script will not execute on the system. Please see "get-help about_signing" for more details.. '. Process Exit Code -1. The step failed. Ok. So i run Powershell on that server then set the execution policy to unrestricted. To check .. PS C:\Users\theUser> Get-ExecutionPolicy Unrestricted PS C:\Users\theUser> Kewl :) but it still doesn't work :( Ok ... what happens when i try to run the powershell from the command line.... PS C:\Users\justin.adler . '\polanski\Backups\Database\7ZipMotorshoutDatabases.ps1' Security Warning Run only scripts that you trust. While scripts from the Internet can be useful, this script can potentially harm your computer. Do you want to run \\polanski\Backups\Database\7ZipFooDatabases.ps1? [D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"): er..... didn't I already tell the server that ANY file can be ran? Notice the file is located at... \\polanski\Backups\Database\ So can someone make any suggestions?
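
    The wrinkle is usually that the script lives on a UNC share, so PowerShell treats it as a remote file and prompts (or refuses) even when the local execution policy is Unrestricted, and the SQL Agent job may also be running the 32-bit PowerShell host whose policy is set separately. A hedged sketch of the usual workarounds (verify which applies to your setup):

        # relax the policy for the machine (repeat from the 32-bit host at
        # C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe if SQL Agent uses it)
        Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope LocalMachine
        # or copy the script locally, add \\polanski to the Local Intranet zone so the
        # remote-file check no longer applies, or sign the script with a trusted certificate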

    Read the article

  • Technically, How does uploading Apps to an Android Phone Work?

    - by unixman83
    Amazon.com has an Android Marketplace. How do the apps go from Amazon.com to my phone? I am looking for a protocol level analysis. Do they use a basic protocol like FTP and then check with a Google digital signature? I do not own an Android. I wish for an explanation of how the protocol operates because I want to provide an android app for free download for my users off my website, like Amazon does.
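
    At the protocol level it is essentially ordinary HTTP(S): the Market/Amazon client downloads the APK over the web, and Android's package installer verifies the developer signature embedded in the APK itself rather than checking back with Google. To offer your own app from your website you mainly need to serve the file with the right MIME type and have users enable installs from unknown sources; a sketch for Apache (the filename in the link is a placeholder):

        # httpd.conf or .htaccess: serve .apk with the Android package MIME type
        AddType application/vnd.android.package-archive .apk
        <!-- then link to it like any other download -->
        <a href="/downloads/myapp.apk">Install MyApp</a>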

    Read the article

  • How to Prevent Spotlight from Indexing Non-Apps in /Applications Directory?

    - by Ross Charette
    The only things I need indexed in the /Applications folder are the .app files. Is there any way to set up a filter to have mds or Spotlight ignore everything in /Applications except .apps? Otherwise, would it be possible to set up a rule for Alfred to omit any non-.app records from /Applications? I still want documents indexed and returned, just not from that specific directory. OS X 10.6.8 if you're wondering.

    Read the article

  • Puppet execution of a Python script where the os.system(...) command is not working

    - by philippe
    I am trying to manage Unix users with Puppet. Puppet provides enough tools to create accounts and provision authorized_keys files, for instance, but not to set an initial user password and tell the user about it. What I have done is a Python script which generates a random password and sends it to the user by email. The problem is that I could not drive the Unix passwd command from Python, so I wrote a bash script with the commands: echo -ne "$password\n$password\n" | passwd $user passwd -e $user Launched manually, the script works fine and the created user gets its password by email. But when Puppet launches it, only the Python script gets executed, as if the os.system('/bin/bash my_bash_script') call is ignored. No error is displayed. The user gets the email with the password, but the passwd commands are never run. Is there any limitation in Puppet that prevents what I described? Or how else can I change the user account and its expiration, and send the password by email? I can provide more information, but right now I don't know which details are relevant. Many thanks!
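
    A sketch of one way to avoid the bash round-trip entirely (hypothetical helper, not from the question): drive chpasswd and passwd straight from Python with subprocess, using absolute paths, since the environment Puppet hands to execs is typically minimal and silent failures like the one described often come down to PATH or shell quoting.

        import subprocess

        def set_password(user, password):
            # chpasswd reads "user:password" pairs on stdin (Python 2 era; encode for Python 3)
            p = subprocess.Popen(['/usr/sbin/chpasswd'], stdin=subprocess.PIPE)
            p.communicate('%s:%s\n' % (user, password))
            if p.returncode != 0:
                raise RuntimeError('chpasswd failed for %s' % user)
            # force a password change at first login, like passwd -e in the bash script
            subprocess.check_call(['/usr/bin/passwd', '-e', user])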

    Read the article

  • Looking for a PowerShell script that can pull a file from a set of PCs and FTP it to a server

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe with a ping), then go into a directory on that PC, pull a file off of it, rename it, place it into a staging directory, and remove the original. The naming convention doesn't matter, but a date/time stamp would be easiest. Ideally it would be best to first move all the files to the staging directory to save on FTP bandwidth, but since the files are all named the same, they must be renamed during the move. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. Once everything is in the staging directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can retrieve their files manually, so the script should output a file (txt is fine) listing which PCs were offline. Everything is on one domain and the script will be run from a server with admin creds. Thank you!
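
    A rough sketch under stated assumptions (the remote path, FTP host and credentials are placeholders, and each PC's file is reachable over the admin share): ping each machine, move and timestamp its file into a staging folder, upload everything, and write out the list of machines that never answered.

        $computers  = Get-Content "C:\scripts\pcs.txt"
        $staging    = "C:\staging"
        $remotePath = 'c$\AppData\export.dat'        # path on each PC - placeholder
        $offline    = @()

        foreach ($pc in $computers) {
            if (Test-Connection -ComputerName $pc -Count 1 -Quiet) {
                $src  = "\\$pc\$remotePath"
                $dest = Join-Path $staging ("{0}_{1}.dat" -f $pc, (Get-Date -Format yyyyMMdd_HHmmss))
                # Move-Item both renames the file and empties the source directory
                Move-Item -Path $src -Destination $dest -ErrorAction SilentlyContinue
            } else {
                $offline += $pc
            }
        }
        $offline | Out-File (Join-Path $staging "offline.txt")

        # upload the staged files; System.Net.WebClient speaks ftp:// natively
        $wc = New-Object System.Net.WebClient
        $wc.Credentials = New-Object System.Net.NetworkCredential("ftpuser", "ftppass")
        Get-ChildItem $staging -Filter *.dat | ForEach-Object {
            $wc.UploadFile("ftp://ftp.example.com/incoming/$($_.Name)", $_.FullName)
        }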

    Read the article

  • Different font SIZES in a Text Editor, based on Script (Alphabet) type (i.e. per Unicode Code Block)

    - by fred.bear
    Some non-Latin-based scripts (alphabets) have more detail in their glyphs than the Latin-based equivalents, and typically need a larger font to give the same degree of legibility (resolution-wise). Sometimes both script types need to be present in the same file. Notepad++ allows different font SIZES (and colour, etc.) courtesy of syntax highlighting. This allows me to display larger-fonted non-Latin-based script in a // BIG-FONT comment. Although this has been quite handy for me in some situations, it is quite limited. A Word Processor can handle this scenario, but I'm not interested in that. I want a nice simple(?) plain(?) Text Editor to do it... on a per-script-type basis... e.g. mixing Latin-1 and Devanagari (and Mandarin, and ...). Such a thing may not exist, but Notepad++ has shown that a simple(?) plain(?) Text Editor is capable of it. Does anyone know of such a Text Editor? ...Q. Why not a Word Processor? ...A. Because GCC and Python don't like that format; but UTF-8 is fine.

    Read the article

  • Are applications on the Web/Cloud the way to go, over Desktop apps?

    - by jiewmeng
    I am currently mainly a web developer, but I am quite attracted to the performance and the great OS integration (e.g. Windows 7 Jump Lists, taskbar thumbnails, etc.) that something like WPF/C# can provide to the user, improving workflow and productivity. Privacy and performance seem like the major downsides of web/cloud apps compared to desktop apps. Cloud/web applications: they work on the go (helped by the increased popularity of smartphones and netbooks); the majority of users may not benefit much from the extra performance of desktop apps for things like web surfing and word processing, and probably benefit more from decreased startup times, lower costs and having their data in the cloud. Desktop applications: the increased performance benefits power users such as 3D rendering, HD video/photo editing and gaming (I wonder if such processing may be offloaded to the cloud); OS integration increases productivity (maybe such features can be adapted to a web version, perhaps with a local desktop app that talks to a web app API); more control over privacy (maybe fixed by encryption?); local data access (especially large files) is guaranteed and fast (though YouTube HD is fast enough most of the time); work is not affected by intermittent, slow or unavailable internet connections (I know this is changing, though). What do you think?

    Read the article
