Search Results

Search found 60846 results on 2434 pages for 'spring data mongodb'.

Page 160/2434 | < Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >

  • Willy Rotstein on Supply Chain Planning

    - by sarah.taylor(at)oracle.com
    Each time a merchandiser, buyer or planner in Retail makes a business decision around assortment, inventory, pricing and promotions there is an opportunity to improve both Profitability and Customer Service. Improving decision making, however, has always been a tricky business for retailers.  I have worked in this space for more than 15 years. I began my career as an academic, at Imperial College London, and then broadened this interest with Retailers, aiming to optimize their merchandising and supply chain decisions. Planning the business and optimizing profit is a complex process. The complexity arises from the variety of people involved, the large number of decisions to take across all business processes, the uncertainty intrinsic to the retail environment as well as the volume of data available for analysis.  Things are not getting any easier either. The advent of multi-channel, social media and mobile is taking these complexities to a new level and presenting additional opportunities for those willing to exploit them. I guess it is due to the complexities of the decision making process that, over the last couple of years working with Oracle Retail, I have witnessed a clear trend around the deployment of planning systems. Retailers are aiming to simplify their decision making processes. They want to use one joined up planning platform across the business and enhance it with "actionable" data mining and optimization techniques. At Oracle Retail, we have a vibrant community of international retailers who regularly come together to discuss the big issues in retail planning. It is a combination of fashion, grocery and speciality retailers, all sharing their best practice vision for planning and optimizing merchandise decisions. As part of the Retail Exchange program, at the recent National Retail Federation event in New York, I jointly hosted a Planning dinner with Peter Fitzgerald from Google UK, Retail Division. Those retailers from our international planning community who were in New York for the annual NRF event were able to attend. The group comprised some of Europe's great International Retail brands.  All sectors were represented by organisations like Mango, LVMH, Ahold, Morrisons, Shop Direct and River Island. They confirmed the current importance of engaging with Planning and Optimization issues. In particular the impact of the internet was a key topic. We had a great debate about new retail initiatives.  Peter highlighted how mobility is changing retail - in particular with the new "local availability search" initiative. We also had an exciting discussion around the opportunities to improve merchandising using the new data that is becoming available from search, social media and ecommerce sites. It will be our focus to continue to help retailers translate this data into better results while keeping their business operations simple. New developments in "actionable" analytics and computing capacity make this a very exciting area today. Watch this space for my contributions on these topics which will be made available through this blog. Oracle Retail has a strong Planning community. if you are a category manager, a planner, a buyer, a merchandiser, a retail supplier or any retail executive with a keen interest in planning then you would be very welcome to join Oracle Retail's Planning Community. 
As part of our community you will be able to join our in-person and virtual events, download topical white papers and best practice information specifically tailored to your area of interest.  If anyone would like to register their interest in joining our community of retailers discussing planning then please contact me at [email protected]   Willy Rotstein, Oracle Retail

    Read the article

  • SQL SERVER – Updating Data in A Columnstore Index

    - by pinaldave
    So far I have written two articles on Columnstore Indexes, and both of them got very interesting readership. In fact, just recently I got a query on my previous article on the Columnstore Index. Read the following two articles to get familiar with the Columnstore Index; they provide the background to the question asked by a reader: SQL SERVER – Fundamentals of Columnstore Index and SQL SERVER – How to Ignore Columnstore Index Usage in Query. Here is the reader's question: "When I tried to update my table after creating the Columnstore index, it gives me an error. What should I do?" When a Columnstore index is created on a table, the table becomes a read-only table and does not allow any insert/update/delete. The basic understanding is that a Columnstore index is created on a table that is very large and holds lots of data. If a table is small enough, there is no need to create a Columnstore index; a regular index should suffice. The reason the Columnstore index was needed in the first place is that the table was so big that retrieving the data was taking a really, really long time. Now, updating such a huge table is always a challenge by itself. If a Columnstore index is created on the table and the table needs to be updated, you need to know that there are various ways to update it. The easiest way is to disable the index, run the update, and then rebuild the index. Consider the following code:

        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
        ([SalesOrderDetailID])
        GO
        -- Create Sample Data Table
        -- WARNING: This query may run up to 2-10 minutes based on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100
        -- Create ColumnStore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
        ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO
        -- Attempt to Update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO
        /* It will throw the following error:
        Msg 35330, Level 15, State 1, Line 2
        UPDATE statement failed because data cannot be updated in a table with a columnstore index.
        Consider disabling the columnstore index before issuing the UPDATE statement,
        then rebuilding the columnstore index after UPDATE is complete. */

    A similar error also shows up for INSERT and DELETE statements. Here is the workaround: disable the Columnstore index, perform the update, then rebuild the Columnstore index:

        -- Disable the Columnstore Index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE
        GO
        -- Attempt to Update the table
        UPDATE [dbo].[MySalesOrderDetail]
        SET OrderQty = OrderQty + 1
        WHERE [SalesOrderID] = 43659
        GO
        -- Rebuild the Columnstore Index
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD
        GO

    This time it does not throw an error and the update completes successfully.
    Let us do a cleanup of our tables using this code:

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    In the next post we will see how we can use Partition to update the Columnstore Index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Android - passing data between Activities

    - by Bill Osuch
    (To follow along with this, you should understand the basics of starting new activities: Link ) The easiest way to pass data from one activity to another is to create your own custom bundle and pass it to your new class. First, create two new activities called Search and SearchResults (make sure you add the second one you create to the AndroidManifest.xml file!), and create xml layout files for each. Search's file should look like this: <?xml version="1.0" encoding="utf-8"?> <LinearLayout     xmlns:android="http://schemas.android.com/apk/res/android"     android:layout_width="fill_parent"     android:layout_height="fill_parent"     android:orientation="vertical">     <TextView          android:layout_width="fill_parent"      android:layout_height="wrap_content"      android:text="Name:"/>     <EditText                android:id="@+id/edittext"         android:layout_width="fill_parent"         android:layout_height="wrap_content"/>     <TextView          android:layout_width="fill_parent"         android:layout_height="wrap_content"         android:text="ID Number:"/>     <EditText                android:id="@+id/edittext2"                android:layout_width="fill_parent"                android:layout_height="wrap_content"/>     <Button           android:id="@+id/btnSearch"          android:layout_width="fill_parent"         android:layout_height="wrap_content"         android:text="Search" /> </LinearLayout> and SearchResult's should look like this: <?xml version="1.0" encoding="utf-8"?> <LinearLayout     xmlns:android="http://schemas.android.com/apk/res/android"     android:layout_width="fill_parent"     android:layout_height="fill_parent"     android:orientation="vertical">     <TextView          android:id="@+id/txtName"         android:layout_width="fill_parent"         android:layout_height="wrap_content"/>     <TextView          android:id="@+id/txtState"         android:layout_width="fill_parent"         android:layout_height="wrap_content"         android:text="No data"/> </LinearLayout> Next, we'll override the OnCreate method of Search: @Override public void onCreate(Bundle savedInstanceState) {     super.onCreate(savedInstanceState);     setContentView(R.layout.search);     Button search = (Button) findViewById(R.id.btnSearch);     search.setOnClickListener(new View.OnClickListener() {         public void onClick(View view) {                           Intent intent = new Intent(Search.this, SearchResults.class);              Bundle b = new Bundle();                           EditText txt1 = (EditText) findViewById(R.id.edittext);             EditText txt2 = (EditText) findViewById(R.id.edittext2);                                      b.putString("name", txt1.getText().toString());             b.putInt("state", Integer.parseInt(txt2.getText().toString()));                              //Add the set of extended data to the intent and start it             intent.putExtras(b);             startActivity(intent);          }     }); } This is very similar to the previous example, except here we're creating our own bundle, adding some key/value pairs to it, and adding it to the intent. 
Now, to retrieve the data, we just need to grab the Bundle that was passed to the new Activity and extract our values from it: @Override public void onCreate(Bundle savedInstanceState) {     super.onCreate(savedInstanceState);     setContentView(R.layout.search_results);     Bundle b = getIntent().getExtras();     int value = b.getInt("state", 0);     String name = b.getString("name");             TextView vw1 = (TextView) findViewById(R.id.txtName);     TextView vw2 = (TextView) findViewById(R.id.txtState);             vw1.setText("Name: " + name);     vw2.setText("State: " + String.valueOf(value)); }

    Read the article

  • How to store a shmup level?

    - by pek
    I am developing a 2D shmup (e.g. Aero Fighters) and I was wondering what the various ways are to store a level. Assuming that enemies are defined in their own xml file, how would you define when an enemy spawns in the level? Would it be based on time? Updates? Distance? Currently I do this based on "level time" (the amount of time the level is running; pausing doesn't update the time). Here is an example (the serialization was done by XNA):

        <?xml version="1.0" encoding="utf-8"?>
        <XnaContent xmlns:level="pekalicious.xanor.XanorContentShared.content.level">
          <Asset Type="level:Level">
            <Enemies>
              <Enemy>
                <EnemyType>data/enemies/smallenemy</EnemyType>
                <SpawnTime>PT0S</SpawnTime>
                <NumberOfSpawns>60</NumberOfSpawns>
                <SpawnOffset>PT0.2S</SpawnOffset>
              </Enemy>
              <Enemy>
                <EnemyType>data/enemies/secondenemy</EnemyType>
                <SpawnTime>PT0S</SpawnTime>
                <NumberOfSpawns>10</NumberOfSpawns>
                <SpawnOffset>PT0.5S</SpawnOffset>
              </Enemy>
              <Enemy>
                <EnemyType>data/enemies/secondenemy</EnemyType>
                <SpawnTime>PT20S</SpawnTime>
                <NumberOfSpawns>10</NumberOfSpawns>
                <SpawnOffset>PT0.5S</SpawnOffset>
              </Enemy>
              <Enemy>
                <EnemyType>data/enemies/boss1</EnemyType>
                <SpawnTime>PT30S</SpawnTime>
                <NumberOfSpawns>1</NumberOfSpawns>
                <SpawnOffset>PT0S</SpawnOffset>
              </Enemy>
            </Enemies>
          </Asset>
        </XnaContent>

    Each Enemy element is basically a wave of a specific enemy type. The type is defined in EnemyType, while SpawnTime is the "level time" at which this wave should appear. NumberOfSpawns and SpawnOffset are the number of enemies that will show up and the time between each spawn, respectively. This could be a good idea or there could be better ones out there; I'm not sure, and I would like to see some opinions and ideas. I have two problems with this: spawning an enemy correctly and creating a level editor. The level editor thing is an entirely different problem (which I will probably post about in the future :P). As for spawning correctly, the problem lies in the fact that I have a variable update time, so I need to make sure I don't miss an enemy spawn because the spawn offset is too small, or because the update took a little more time. I kinda fixed it for the most part, but it seems to me that the problem is with how I store the level. So, any ideas? Comments? Thank you in advance.
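    One way to keep a time-based level definition from skipping spawns when a frame runs long is to ask each wave how many spawns are due at the current level time and catch up in a loop, instead of checking a single tick for equality. A rough sketch of that idea in Python (the Wave and Level names and fields mirror the XML above but are otherwise invented, not taken from the post):

        # Sketch only: Wave holds spawn_time, number_of_spawns and spawn_offset in
        # seconds, mirroring the XML above. Names are illustrative assumptions.
        class Wave:
            def __init__(self, enemy_type, spawn_time, number_of_spawns, spawn_offset):
                self.enemy_type = enemy_type
                self.spawn_time = spawn_time
                self.number_of_spawns = number_of_spawns
                self.spawn_offset = spawn_offset
                self.spawned = 0  # how many enemies of this wave are already out

            def due_spawns(self, level_time):
                # How many spawns should have happened by level_time in total.
                if level_time < self.spawn_time:
                    return 0
                elapsed = level_time - self.spawn_time
                if self.spawn_offset > 0:
                    due = int(elapsed / self.spawn_offset) + 1
                else:
                    due = self.number_of_spawns
                return min(due, self.number_of_spawns)

        class Level:
            def __init__(self, waves):
                self.waves = waves
                self.level_time = 0.0

            def update(self, dt, spawn_enemy):
                # dt may vary per frame; spawns are caught up rather than matched to a tick
                self.level_time += dt
                for wave in self.waves:
                    while wave.spawned < wave.due_spawns(self.level_time):
                        spawn_enemy(wave.enemy_type)
                        wave.spawned += 1

    With the catch-up loop, one long update simply emits every spawn that became due during it, so a small SpawnOffset is never skipped.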

    Read the article

  • Achieve Named Criteria with multiple tables in EJB Data control

    - by Deepak Siddappa
    In EJB you create a named criteria using a sparse XML, and in the named criteria wizard only the attributes related to that particular entity will be displayed. So there we can filter results only on a particular entity bean. Take a scenario where we need to create a named criteria based on multiple tables using EJB. In BC4J we can achieve this by creating a view object based on multiple tables. So in this article, we will try to achieve a named criteria based on multiple tables using EJB.
    Implementation Steps
    Create a Java EE Web Application with entities based on Departments and Employees, then create a session bean and a data control for the session bean. Create a Java bean, name it CustomBean, and add the code below to the file. Here, three fields are taken from each of the Departments and Employees tables.

        public class CustomBean {
            private BigDecimal departmentId;
            private String departmentName;
            private BigDecimal locationId;
            private BigDecimal employeeId;
            private String firstName;
            private String lastName;

            public CustomBean() {
                super();
            }

            public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; }
            public BigDecimal getDepartmentId() { return departmentId; }
            public void setDepartmentName(String departmentName) { this.departmentName = departmentName; }
            public String getDepartmentName() { return departmentName; }
            public void setLocationId(BigDecimal locationId) { this.locationId = locationId; }
            public BigDecimal getLocationId() { return locationId; }
            public void setEmployeeId(BigDecimal employeeId) { this.employeeId = employeeId; }
            public BigDecimal getEmployeeId() { return employeeId; }
            public void setFirstName(String firstName) { this.firstName = firstName; }
            public String getFirstName() { return firstName; }
            public void setLastName(String lastName) { this.lastName = lastName; }
            public String getLastName() { return lastName; }
        }

    Open the session EJB file, add the code below to the session bean, expose the method in the local/remote interface and generate a data control for it. Note: in the code below, "em" is an EntityManager.

        public List<CustomBean> getCustomBeanFindAll() {
            String queryString = "select d.department_id, d.department_name, d.location_id, e.employee_id, e.first_name, e.last_name from departments d, employees e\n" +
                "where e.department_id = d.department_id";
            Query genericSearchQuery = em.createNativeQuery(queryString, "CustomQuery");
            List resultList = genericSearchQuery.getResultList();
            Iterator resultListIterator = resultList.iterator();
            List<CustomBean> customList = new ArrayList();
            while (resultListIterator.hasNext()) {
                Object col[] = (Object[]) resultListIterator.next();
                CustomBean custom = new CustomBean();
                custom.setDepartmentId((BigDecimal) col[0]);
                custom.setDepartmentName((String) col[1]);
                custom.setLocationId((BigDecimal) col[2]);
                custom.setEmployeeId((BigDecimal) col[3]);
                custom.setFirstName((String) col[4]);
                custom.setLastName((String) col[5]);
                customList.add(custom);
            }
            return customList;
        }

    Open the DataControls.dcx file and create a sparse XML for CustomBean. In the sparse XML navigate to the Named Criteria tab -> Bind Variables section and create two bind variables, deptId and fName. In the sparse XML navigate to the Named Criteria tab -> Named Criteria, create a named criteria and map the query attributes to the bind variables. In the ViewController create a .jspx page and, from the Data Controls palette, drop customBeanFindAll -> Named Criteria -> CustomBeanCriteria -> Query as an ADF Query Panel with Table.
    Run the .jspx page and enter values in the search form with departmentId as 50 and firstName as "M". The named criteria will filter the query of the data source and display the result.

    Read the article

  • Perl: comparing 2 data files as 2D arrays to find one-to-one matches [migrated]

    - by roman serpa
    I'm writing a program that uses combinations of variables (combiData.txt, 63 rows x a varying number of columns) for analysing a data table (j1j2_1.csv, 1000 rows x 19 columns), to count how many times each combination is repeated in the data table and which rows it comes from (for instance, tableData[row][4]). When I try to run it, however, I get the following messages:

        Use of uninitialized value $val in numeric eq (==) at rowInData.pl line 34.
        Use of reference "ARRAY(0x1a2eae4)" as array index at rowInData.pl line 56.
        Use of reference "ARRAY(0x1a1334c)" as array index at rowInData.pl line 56.
        Use of uninitialized value in subtraction (-) at rowInData.pl line 56.
        Modification of non-creatable array value attempted, subscript -1 at rowInData.pl line 56.
        nothing

    This is my code:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $line_match;
        my $countTrue;

        open (FILE1, "<combiData.txt") or die "can't open file text1.txt\n";
        my @tableCombi;
        while(<FILE1>) {
            my @row = split(' ', $_);
            push(@tableCombi, \@row);
        }
        close FILE1 || die $!;

        open (FILE2, "<j1j2_1.csv") or die "can't open file text1.txt\n";
        my @tableData;
        while(<FILE2>) {
            my @row2 = split(/\s*,\s*/, $_);
            push(@tableData, \@row2);
        }
        close FILE2 || die $!;

        # function that transforms a combiData.txt variable (position) to the real value
        # that I have to find in the data table.
        sub trueVal($){
            my ($val) = $_[0];
            if($val == 7){ return ('nonsynonymous_SNV'); }
            elsif( $val == 14) { return '1'; }
            elsif( $val == 15) { return '1'; }
            elsif( $val == 16) { return '1'; }
            elsif( $val == 17) { return '1'; }
            elsif( $val == 18) { return '1'; }
            elsif( $val == 19) { return '1'; }
            else { print 'nothing'; }
        }

        # function IntToStr() (I'm not sure if it is necessary) that turns values into strings,
        # to use eq in the third loop for the array of combinations compared with the data array.
        sub IntToStr {
            return "$_[0]";
        }

        for my $combi (@tableCombi) {
            $line_match = 0;
            for my $sheetData (@tableData) {
                $countTrue=0;
                for my $cell ( @$combi) {
                    #my $temp =\$tableCombi[$combi][$cell] ;
                    #if ( trueVal($tableCombi[$combi][$cell] ) eq $tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ] ){
                    #if ( IntToStr(trueVal($$temp )) eq IntToStr( $tableData[$sheetData][ $$temp-1] ) ){
                    if ( IntToStr(trueVal($tableCombi[$combi][$cell]) ) eq IntToStr($tableData[$sheetData][ $tableCombi[$combi][$cell] -1]) ){
                        $countTrue++;
                    }
                    if ($countTrue==@$combi){
                        $line_match++;
                        #if ($line_match < 50){
                        print $tableData[$sheetData][4]." ";
                        #}
                    }
                }
            }
            print $line_match." \n";
        }

    Read the article

  • Non use of persisted data

    - by Dave Ballantyne
    Working at a client site (that in itself is good to say), I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. While optimizing a stored procedure, I found a piece of code similar to:

        select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid)
        from sales.salesorderheader
        where BillToAddressID = 985

    A lovely scalar UDF was being used; in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as:

        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid)

    and the function code is:

        Create Function udfCleanGuid(@GUID uniqueidentifier)
        returns varchar(255)
        with schemabinding
        as
        begin
            Declare @RetStr varchar(255)
            Select @RetStr=CAST(@Guid as varchar(255))
            Select @RetStr=REPLACE(@Retstr,'-','')
            return @RetStr
        end

    Executing the Select statement produced a plan with nothing surprising: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column:

        Alter table sales.salesorderheader
        add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED

    A subtle change to the SELECT statement…

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    and our new optimized plan looks… not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen; the value was recalculated. Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example is shown in the profiler SP:StatementCompleted event. Why did the recalculation happen rather than a lookup? Remember the index definition: it includes the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF, runs through its logic and decides that recalculating is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since in practice it invariably is. Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option, as there may be other code reliant on it. We are left with only one real option: add the persisted column into the index.

        drop index Sales.SalesOrderHeader.idxBill
        go
        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid, cleanedguid)

    Now if we repeat the statement:

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    We still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

  • mongo mapper with STI with more than one type?

    - by holden
    I have a series of models, all of which inherit from a base model, Property: for example Bars, Restaurants, Cafes, etc.

        class Property
          include MongoMapper::Document
          key :name, String
          key :_type, String
        end

        class Bar < Property

    What I'm wondering is what to do in the case when a record happens to be both a Bar and a Restaurant. Is there a way for a single object to inherit the attributes of both models? And how would it work with the key :_type?

    Read the article

  • Why No android.content.SyncAdapter meta-data registering sync-adapter?

    - by mobibob
    I am following the SampleSyncAdapter and upon startup, it appears that my SyncAdapter is not configured correctly. It reports an error trying to load its meta-data. How can I isolate the problem? You can see the other accounts in the system that register correctly. Logcat: 12-21 17:10:50.667 W/PackageManager( 121): Unable to load service info ResolveInfo{4605dcd0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000} 12-21 17:10:50.667 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data 12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391) 12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260) 12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110) 12-21 17:10:50.667 W/PackageManager( 121): at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892) 12-21 17:10:50.667 W/PackageManager( 121): at android.os.Handler.handleCallback(Handler.java:587) 12-21 17:10:50.667 W/PackageManager( 121): at android.os.Handler.dispatchMessage(Handler.java:92) 12-21 17:10:50.667 W/PackageManager( 121): at android.os.Looper.loop(Looper.java:123) 12-21 17:10:50.667 W/PackageManager( 121): at com.android.server.ServerThread.run(SystemServer.java:570) 12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.skype.contacts.sync, packageName=com.skype.raider 12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.twitter.android.auth.login, packageName=com.twitter.android 12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.example.android.samplesync, packageName=com.example.android.samplesync 12-21 17:10:50.747 W/PackageManager( 121): Unable to load service info ResolveInfo{460504b0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000} 12-21 17:10:50.747 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data 12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391) 12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260) 12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110) 12-21 17:10:50.747 W/PackageManager( 121): at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892) 12-21 17:10:50.747 W/PackageManager( 121): at android.os.Handler.handleCallback(Handler.java:587) 12-21 17:10:50.747 W/PackageManager( 121): at android.os.Handler.dispatchMessage(Handler.java:92) 12-21 17:10:50.747 W/PackageManager( 121): at android.os.Looper.loop(Looper.java:123) 12-21 17:10:50.747 W/PackageManager( 121): at com.android.server.ServerThread.run(SystemServer.java:570)
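    Judging from the "No android.content.SyncAdapter meta-data" message, the usual culprit is the sync service's entry in AndroidManifest.xml missing the meta-data element that points at the sync-adapter XML resource. For reference, the declaration pattern used by the SampleSyncAdapter looks roughly like this; the service class name and the @xml/syncadapter resource name are placeholders, not taken from the log:

        <!-- Sketch of the expected service declaration; adjust names to your app -->
        <service
            android:name=".MySyncService"
            android:exported="true">
            <intent-filter>
                <action android:name="android.content.SyncAdapter" />
            </intent-filter>
            <!-- This is the meta-data entry the PackageManager warning says is missing -->
            <meta-data
                android:name="android.content.SyncAdapter"
                android:resource="@xml/syncadapter" />
        </service>

    The @xml/syncadapter file referenced there is the sync-adapter description (account type and content authority) that the SampleSyncAdapter project ships alongside its manifest.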

    Read the article

  • Failed to save to data store: The operation couldn’t be completed. (Cocoa error 133020.)

    - by poslinski.net
    I am working on quite complex app with huge sync procedure beetwen iphone and web server. I have no troubles with adding records, until I run sync procedure in separete thread, and It will update data on serwer, and send them back to iphone. But after this procedure, inserting new data cause error, such as this: 2011-01-07 12:49:10.722 App[1987:207] Failed to save to data store: The operation couldn’t be completed. (Cocoa error 133020.) 2011-01-07 12:49:10.724 App[1987:207] { conflictList = ( "NSMergeConflict (0x5ac1ea0) for NSManagedObject (0x5a2d710) with objectID '0x5a27080 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/uzytkownicy/p10>' with oldVersion = 9 and newVersion = 21 and old object snapshot = {\n adres = \"<null>\";\n haslo = xxxxxxxxxxxxxxxxxxxxxx;\n \"id_uzytkownika\" = 3;\n imie = Jan;\n \"kod_jednorazowy\" = 0;\n komorka = \"<null>\";\n login = nowakjan;\n nazwisko = Nowak;\n pesel = 0;\n rodzaj = 2;\n \"stan_konta\" = 0;\n telefon = \"<null>\";\n \"uzytkownicy_uczniowie\" = \"<null>\";\n \"zmienna_losowa\" = 8G9e1;\n} and new cached row = {\n adres = \"<null>\";\n haslo = xxxxxxxxxxxxxxxxxxxxxx;\n \"id_uzytkownika\" = 3;\n imie = Jan;\n \"kod_jednorazowy\" = 0;\n komorka = \"<null>\";\n login = nowakjan;\n nazwisko = Nowak;\n pesel = 0;\n rodzaj = 2;\n \"stan_konta\" = 0;\n telefon = \"<null>\";\n \"uzytkownicy_uczniowie\" = \"<null>\";\n \"zmienna_losowa\" = 8G9e1;\n}", "NSMergeConflict (0xd266990) for NSManagedObject (0xcd05950) with objectID '0x5a453b0 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/uczniowie/p125>' with oldVersion = 5 and newVersion = 10 and old object snapshot = {\n adres = \"Warszawa; ul. Lwowska 32\";\n \"data_urodzenia\" = \"1997-02-01 23:00:00 +0000\";\n dysfunkcje = \"\";\n email = \"<null>\";\n frekwencja = 0;\n \"id_ucznia\" = 86;\n imie2 = Marian;\n \"imie_ucznia\" = \"S\\U0142awomir\";\n klasa = \"0x5a47820 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/zespoly/p9>\";\n komorka = \"<null>\";\n \"miejsce_urodzenia\" = Warszawa;\n \"nazwisko_ucznia\" = \"S\\U0142awek\";\n \"numer_ewidencyjny\" = 20;\n opiekun1 = \"Mariusz S\\U0142awek\";\n opiekun2 = \" \";\n pesel = 97020298919;\n plec = 1;\n telefon = 890000002;\n \"uzytkownicy_uczniowie\" = \"<null>\";\n \"web_klasa\" = 50;\n} and new cached row = {\n adres = \"Warszawa; ul. Lwowska 32\";\n \"data_urodzenia\" = \"1997-02-01 23:00:00 +0000\";\n dysfunkcje = \"\";\n email = \"<null>\";\n frekwencja = 0;\n \"id_ucznia\" = 86;\n imie2 = Marian;\n \"imie_ucznia\" = \"S\\U0142awomir\";\n klasa = \"0x5a8e7c0 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/zespoly/p9>\";\n komorka = \"<null>\";\n \"miejsce_urodzenia\" = Warszawa;\n \"nazwisko_ucznia\" = \"S\\U0142awek\";\n \"numer_ewidencyjny\" = 20;\n opiekun1 = \"Mariusz S\\U0142awek\";\n opiekun2 = \" \";\n pesel = 97020298919;\n plec = 1;\n telefon = 890000002;\n \"uzytkownicy_uczniowie\" = \"<null>\";\n \"web_klasa\" = 50;\n}", "NSMergeConflict (0xd2669b0) for NSManagedObject (0x5a44480) with objectID '0x5a47830 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/przedmioty/p12>' with oldVersion = 7 and newVersion = 15 and old object snapshot = {\n \"id_przedmiotu\" = 1;\n \"nazwa_przedmiotu\" = Historia;\n \"skrot_nazwy\" = Hist;\n} and new cached row = {\n \"id_przedmiotu\" = 1;\n \"nazwa_przedmiotu\" = Historia;\n \"skrot_nazwy\" = Hist;\n}" ); } I've been looking for any solution but without luck. Thank You in advance for any usefull help.

    Read the article

  • Which Python API should be used with Mongo DB and Django

    - by Thomas
    I have been going back and forth over which Python API to use when interacting with Mongo. I did a quick survey of the landscape and identified three leading candidates: PyMongo, MongoEngine and Ming. If you were designing a new content-heavy website using the Django framework, which API would you choose and why? MongoEngine looks like it was built specifically with Django in mind. PyMongo appears to be a thin wrapper around Mongo. It has a lot of power, though it loses a lot of the abstractions gained through using Django as a framework. Ming represents an interesting middle ground between PyMongo and MongoEngine, though I haven't had the opportunity to take it for a test drive.
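    For a sense of how thin PyMongo is compared to the model-layer feel of MongoEngine, here is a minimal sketch of the same document handled both ways; the database, collection and field names are invented for illustration and the PyMongo calls use the older Connection-style API of that era:

        # Sketch: the same "article" stored via PyMongo and via MongoEngine.
        # Assumes a local mongod; Article/articles/content_site are illustrative names.

        # --- PyMongo: a thin wrapper, you work with plain dicts ---
        from pymongo import Connection          # pymongo 1.x/2.x style

        db = Connection()['content_site']
        db.articles.insert({'title': 'Hello', 'body': 'First post', 'tags': ['django']})
        doc = db.articles.find_one({'title': 'Hello'})

        # --- MongoEngine: declarative documents, closer to Django models ---
        from mongoengine import Document, StringField, ListField, connect

        connect('content_site')

        class Article(Document):
            title = StringField(required=True)
            body = StringField()
            tags = ListField(StringField())

        Article(title='Hello', body='First post', tags=['django']).save()
        article = Article.objects(title='Hello').first()

    Either way the underlying queries are the same; the choice is mostly about how much model structure you want the library to impose on top of them.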

    Read the article

  • Creating a form for editing embedded documents with MongoMapper

    - by Luke Francl
    I'm playing around with MongoMapper but I'm having trouble figuring out how to create a form for an object that has embedded documents. With ActiveRecord, I'd use fields_for but when asked if this would be supported a few months ago, MongoMapper author John Nunemaker wrote: "Nope and nope. It is really [not] that hard with attr_accessor's." OK, fair enough, but how do you write the form for this to work? I'm not interested in using the nested form implementations that are out there because I want to do this the "normal" way as I'm learning about MongoMapper. My model is simple enough - I've got a Person with embedded documents for email addresses, phone numbers, etc. I do not care about updating existing embedded documents. They can be re-created from the form input each time a Person is edited.

    Read the article

  • PlayFramework with Scala and Morphia

    - by AKRamkumar
    I keep getting this exception: Oops: CannotCompileException An unexpected error occured caused by exception CannotCompileException: [source error] ds() not found in models.dc What is wrong with my code? Here is models.ds package models import com.google.code.morphia.annotations._ @Embedded class ds{ @Indexed var xs : Double=0 @Indexed var xc : Double=0 @Indexed var ys : Double=0 @Indexed var yc : Double=0 @Indexed var zs : Double=0 @Indexed var zc : Double=0 } Here is models.dc package models import com.google.code.morphia.annotations.{Embedded, Entity, Indexed} @Entity class dc{ @Indexed var name : String = null @Embedded var summary : ds = new ds() }

    Read the article

  • How can I generate mongoid.yml config in Rails 2.3.5?

    - by Micke
    As the title says, how can I generate the default mongoid.yml config file on Rails 2.3.5? I tried to use the "rails generate mongoid:config" command, but it just generates a new app. Also, I would like to use has_many in Mongoid without embedding the associated model in the same document; I would like them to be in separate collections and associated through a *_id "column". Is that possible? Thank you, Micke.

    Read the article

  • MongoMapper, Rails, Increment Works in Console but not Controller

    - by Michael Waxman
    I'm using mongo_mapper 0.7.5 and rails 2.3.8, and I have an instance method that works in my console, but not in the controller of my actual app. I have no idea why this is. #controller if @friendship.save user1.add_friend(user2) ... #model ... key :friends_count, Integer, :default => 0 key :followers_count, Integer, :default => 0 def add_friend(user) ... self.increment(:friends_count => 1) user.increment(:followers_count => 1) true end And this works in the console, but in the controller it does not change the follower count, only the friends count. What gives? The only thing I can even think of is that the way that I'm passing the user in is the problem, but I'm not sure how to fix it.

    Read the article

  • Encoding::UndefinedConversionError from email body

    - by raam86
    using mail for ruby I am getting this message: mail.rb:22:in `encode': "\xC7" from ASCII-8BIT to UTF-8 (Encoding::UndefinedConversionError) from mail.rb:22:in `<main>' If I remove encode I get a message ruby /var/lib/gems/1.9.1/gems/bson-1.7.0/lib/bson/bson_ruby.rb:63:in `rescue in to_utf8_binary': String not valid utf-8: "<div dir=\"ltr\"><div class=\"gmail_quote\">l<br><br><br><div dir=\"ltr\"><div class=\"gmail_quote\"><br><br><br><div dir=\"ltr\"><div class=\"gmail_quote\"><br><br><br><div dir=\"ltr\"><div dir=\"rtl\">\xC7\xE1\xE4\xD5 \xC8\xC7\xE1\xE1\xDB\xC9 \xC7\xE1\xDA\xD1\xC8\xED\xC9</div></div>\r\n</div><br></div>\r\n</div><br></div>\r\n</div><br></div>" (BSON::InvalidStringEncoding) This is my code: require 'mail' require 'mongo' connection = Mongo::Connection.new db = connection.db("DB") db = Mongo::Connection.new.db("DB") newsCollection = db["news"] Mail.defaults do retriever_method :pop3, :address => "pop.gmail.com", :port => 995, :user_name => 'my_username', :password => '*****', :enable_ssl => true end emails = Mail.last #Checks if email is multipart and decods accordingly. Put to extract UTF8 from body plain_part = emails.multipart? ? (emails.text_part ? emails.text_part.body.decoded : nil) : emails.body.decoded html_part = emails.html_part ? emails.html_part.body.decoded : nil mongoMessage = {"date" => emails.date.to_s , "subject" => emails.subject , "body" => plain_part.encode('UTF-8') } msgID = newsCollection.insert(mongoMessage) #add the document to the database and returns it's ID puts msgID For English and Hebrew it works perfectly but it seems gmail is sending arabic with different encoding. Replacing UTF-8 with ASCII-8BIT gives a similar error. I get the same result when using plain_part for plain email messages. I am handling emails from one specific source so I can put html_part with confidence it's not causing the error. To make it extra weird Subject in Arabic is rendered perfectly. What encoding should I use?

    Read the article

  • NoRM MongoConfiguration

    - by user365836
    Something is wrong with my MongoConfiguration, I think. I read the following on NoRM's Google group:

        On Mar 22, 3:14 am, Andrew Theken wrote:
        It can easily live in the object:

        public class POCO{
            //implicitly runs one time.
            static POCO(){
                MongoConfiguration.Initialize(cfg => cfg.For<POCO>(f => {
                    f.ForProperty(p => p.ALongProperty).UseAlias("A");
                    f.ForProperty(p => p.AnotherLongProperty).UseAlias("B");
                    //etc.
                }));
            }
            public string ALongProperty {get;set;}
            public int AnotherLongProperty{get;set;}
        }

        ;-) //Andrew Theken

    In my class I have:

        public class Item {
            static Item() {
                MongoConfiguration.Initialize(
                    cfg => cfg.For<Item>( c => {
                        c.ForProperty(i => i.Name).UseAlias("N");
                        c.ForProperty(i => i.Description).UseAlias("D");
                    })
                );
            }
            public String Name { get; set; }
            public String Description { get; set; }
        }

    After I save them, there are items in the DB with short key names; the problem is that when I try to read up all previously created items on start, I get no items back.

    Read the article

  • 'schema' design for a social network

    - by Alan B
    I'm working on a proof of concept app for a twitter-style social network with about 500k users. I'm unsure how best to design the 'schema': should I embed a user's subscriptions or have a separate 'subscriptions' collection and use db references? If I embed, I still have to perform a query to get all of a user's followers. E.g., given the following user:

        {
          "username" : "alan",
          "photo": "123.jpg",
          "subscriptions" : [
            {"username" : "john", "status" : "accepted"},
            {"username" : "paul", "status" : "pending"}
          ]
        }

    to find all of alan's subscribers, I'd have to run something like this:

        db.users.find({'subscriptions.username' : 'alan'});

    From a performance point of view, is that any worse or better than having a separate subscriptions collection? Also, when displaying a list of subscriptions/subscribers, I am currently having problems with n+1 because the subscription document tells me the username of the target user but not other attributes I may need, such as the profile photo. Are there any recommended practices for such situations? Thanks, Alan
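    Either layout ends up answering "who follows alan?" with one indexed query; what changes is which collection carries the index and the one-document-per-edge write pattern. A rough sketch of the two options with pymongo, using the older Connection/ensure_index style to match the era (collection, index and database names are illustrative, not from the post):

        # Sketch of the two layouts, assuming pymongo and a local mongod.
        from pymongo import Connection

        db = Connection()['social_poc']

        # Option 1: embedded subscriptions, as in the document above.
        # Index the embedded field so the followers query can use it.
        db.users.ensure_index('subscriptions.username')
        followers = db.users.find({'subscriptions.username': 'alan'})

        # Option 2: a separate subscriptions collection, one document per edge.
        db.subscriptions.ensure_index('target')
        db.subscriptions.insert({'follower': 'john', 'target': 'alan', 'status': 'accepted'})
        followers = db.subscriptions.find({'target': 'alan'})

    In both shapes the follower lookup stays a single indexed query; the trade-off is mostly document growth on heavy accounts versus the extra collection to keep in sync.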

    Read the article

  • Rails + MongoMapper + EmbeddedDocument form help

    - by Bob Martens
    I am working on a pretty simple web application (famous last words) and am working with Rails 2.3.5 + MongoMapper 0.7.2 and using embedded documents. I have two questions to ask: First, are there any example applications out there using Rails + MongoMapper + EmbeddedDocument? Preferably on GitHub or some other similar site so that I can take a look at the source and see where I am supposed to head? If not ... ... what is the best way to approach this task? How would I go about creating a form to handle an embedded document. What I am attempting to do is add addresses to users. I can toss up the two models in question if you would like. Thanks for the help!

    Read the article

  • Mongomapper query collection problem

    - by kylemac
    When I define the User has_many meetings, it automatically creates a "user_id" key/value pair to relate to the User collection. Except I can't run any mongo_mapper finds using this value without it returning nil or []:

        Meeting.first(:user_id => "1234")
        Meeting.all(:user_id => "1234")
        Meeting.find(:user_id => "1234")

    All return nil. Is there another syntax? Basically I can't run a query on the automatically generated associative ObjectId.

        # Methods
        class User
          include MongoMapper::Document
          key :user_name, String, :required => true
          key :password, String
          many :meetings
        end

        class Meeting
          include MongoMapper::Document
          key :name, String, :required => true
          key :count, Integer, :default => 1
        end

        # Sinatra
        get '/add' do
          user = User.new
          user.meetings << Meeting.new(:name => "foobar")
          user.save
        end

        get '/find' do
          test = Meeting.first(:user_id => "4b4f9d6d348f82370b000001") # this is the _id of the newly created user
          p test # WTF! returns []
        end

    Read the article

  • Getting-started: Setup Database for Node.js

    - by Emile Petrone
    I am new to node.js but am excited to try it out. I am using Express as a web framework, and Jade as a template engine. Both were easy to get setup following this tutorial from Node Camp. However the one problem I am finding is I can't find a simple tutorial for getting a DB set up. I am trying to build a basic chat application (store session and message). Does anyone know of a good tutorial? This other SO post talks about dbs to use- but as this is very different from the Django/MySQL world I've been in, I want to make sure I understand what is going on. Thanks!

    Read the article

  • large amount of data in many text files - how to process?

    - by Stephen
    Hi, I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks but I fear this may be a bit large. Some candidate solutions are to 1) write the whole thing in C (or Fortran) 2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable for pure SQL solutions) 3) write the whole thing in Python Would (3) be a bad idea? I know you can wrap C routines in Python but in this case since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks
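    Since option (3) is on the table: the aggregation step itself streams nicely in Python, because each 30MB tab-delimited file can be reduced to partial sums and counts before anything is written out. A rough sketch of that shape, where the directory layout, column indices and the predicate are placeholders rather than anything from the post:

        # Sketch: stream tab-delimited files, aggregate sums/counts per group key,
        # then emit averages. File paths, column positions and the filter are assumptions.
        import csv
        import glob
        from collections import defaultdict

        sums = defaultdict(float)
        counts = defaultdict(int)

        for path in glob.glob('data/*.txt'):
            with open(path) as f:
                reader = csv.reader(f, delimiter='\t')
                for row in reader:
                    if row[2] != 'keep':        # predicate on some column (placeholder)
                        continue
                    key = row[0]                # grouping column (placeholder)
                    sums[key] += float(row[5])  # value column to average (placeholder)
                    counts[key] += 1

        with open('averages.txt', 'w') as out:
            for key in sorted(sums):
                out.write('%s\t%.4f\n' % (key, sums[key] / counts[key]))

    Whether something like this is fast enough is mainly an I/O question; if the predicates and group keys keep changing, loading the files into SQLite or another database first (option 2) keeps repeated scans cheaper.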

    Read the article

  • iPhone SDK: Setting a relationship property object using Core Data?

    - by Harkonian
    I'm using core data in my app. I have two entities that are related: EntityA and EntityB. EntityA has a property of type "relationship" with EntityB. In addition, both of these entities are defined classes (not the default NSManagedObject). I'm inserting a new object into my data like this: EntityA *newEntityA = [NSEntityDescription insertNewObjectForEntityForName:@"EntityA" inManagedObjectContext:self.managedObjectContext]; newEntityA.name = @"some name"; newEntityA.entityB.name = @"some other name"; The problem is entityB.name is null. Even if I add an NSLog() statement right after assigning the value, it is null. What is the proper way of setting my "name" property of EntityB when EntityB is a property of EntityA?

    Read the article
