Search Results

Search found 12222 results on 489 pages for 'initial context'.

Page 74/489

  • NullPointerException while raise an embedded ldap server using spring

    - by omer c
    Hello, I'm trying to raise the Spring Embedded Ldap Server using: But I'm keep on getting this exception: 2010-06-10 14:33:35,559 ERROR main ApacheDSContainer start - Server startup failed java.lang.NullPointerException at org.apache.directory.server.core.schema.DefaultSchemaService.initialize(DefaultSchemaService.java:382) at org.apache.directory.server.core.DefaultDirectoryService.initialize(DefaultDirectoryService.java:1425) at org.apache.directory.server.core.DefaultDirectoryService.startup(DefaultDirectoryService.java:907) at org.springframework.security.ldap.server.ApacheDSContainer.start(ApacheDSContainer.java:160) at org.springframework.security.ldap.server.ApacheDSContainer.afterPropertiesSet(ApacheDSContainer.java:113) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1469) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1409) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:190) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:563) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:872) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3764) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4212) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:760) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:740) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:544) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:626) at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:553) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:488) at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1022) at org.apache.catalina.core.StandardHost.start(StandardHost.java:736) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014) at 
org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at org.apache.catalina.core.StandardService.start(StandardService.java:448) at org.apache.catalina.core.StandardServer.start(StandardServer.java:700) at org.apache.catalina.startup.Catalina.start(Catalina.java:552) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:295) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:433) I'm using spring 3.0.2 and added the following jars for the ldap: spring-security-ldap-3.0.2.RELEASE.jar spring-ldap-1.3.0.RELEASE-all.jar apacheds-all-1.5.6.jar shared-ldap-0.9.15.jar slf4j-api-1.5.6.jar slf4j-simple-1.5.6.jar Help please....
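
    A commonly reported cause of an NPE at DefaultSchemaService.initialize with this stack is a version mismatch: the ApacheDSContainer that ships with Spring Security 3.0.x was written against the ApacheDS 1.5.5 API, and 1.5.6 changed how the schema partition is bootstrapped, so aligning the jars (ApacheDS 1.5.5 plus the matching shared-ldap) is worth trying before anything else. For reference, a minimal programmatic sketch of the same container; the root suffix, LDIF path and port below are hypothetical placeholders, not values from the post:

        import org.springframework.security.ldap.server.ApacheDSContainer;

        public class EmbeddedLdapDemo {
            public static void main(String[] args) throws Exception {
                // Hypothetical root suffix and classpath LDIF; substitute your own.
                ApacheDSContainer server =
                        new ApacheDSContainer("dc=example,dc=com", "classpath:test-users.ldif");
                server.setPort(53389);        // any free port
                server.afterPropertiesSet();  // creates the working directory and starts the embedded server
            }
        }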

    Read the article

  • getView() (for a Custom ListView ) doesn't get called on notifyDatasetChanged()

    - by hungson175
    Hi everyone, I have the following problem, and searched for a while but haven't got any solution from the net: I have a custom list view, each item has the following layout (I just post the essential): <LinearLayout> <ImageView android:id="@+id/friendlist_iv_avatar" /> <TextView andorid:id="@+id/friendlist_tv_nick_name" /> <ImageView android:id="@+id/friendlist_iv_status_icon" /> </LinearLayout> And I have a class FriendRowItem, which is inflated from the above layout: public class FriendRowItem extends LinearLayout{ private ImageView ivAvatar; private ImageView ivStatusIcon; private TextView tvNickName; public FriendRowItem(Context context) { super(context); RelativeLayout friendRow = (RelativeLayout) Helpers.inflate(context, R.layout.friendlist_row); this.addView(friendRow); ivAvatar = (ImageView)findViewById(R.id.friendlist_iv_avatar); ivStatusIcon = (ImageView)findViewById(R.id.friendlist_iv_status_icon); tvNickName = (TextView)findViewById(R.id.friendlist_tv_nick_name); } public void setPropeties(Friend friend) { //Avatar ivAvatar.setImageResource(friend.getAvatar().getDrawableResourceId()); //Status Status.Type status = friend.getStatusType(); if ( status == Type.ONLINE) { ivStatusIcon.setImageResource(R.drawable.online_icon); } else { ivStatusIcon.setImageResource(R.drawable.offline_icon); } //Nickname String name = friend.getChatID(); if ( friend.hasName()) { name = friend.getName(); } tvNickName.setText(name); } } In the main activity, I have a custom listview: lvMainListView, with an custom adapter (whose class extends ArrayAdapter - and off course: override the method getView ), the data set of the adapter is: ArrayList<Friend> friends: private class FriendRowAdapter extends ArrayAdapter<Friend> { public FriendRowAdapter(Context applicationContext, int friendlistRow, ArrayList<Friend> friends) { super(applicationContext, friendlistRow, friends); } @Override public View getView(int position,View convertView,ViewGroup parent) { Friend friend = getItem(position); FriendRowItem row = (FriendRowItem) convertView; if ( row == null ) { row = new FriendRowItem(ShowFriendsList.this.getApplicationContext()); } row.setPropeties( friend ); return row; } } the problem is when I change the status of a friend from OFFLINE to ONLINE, then call notifyDataSetChanged(), nothing happens : the status icon of that friend doesn't change. I tried debugging, and saw the code: notifyDataSetChanged() get called, but the custom getView() is not fired ! Can you please tell me, that is normal in Android, or did I do something wrong ? (I am using Android 1.5). Thank you in advance, Son
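
    Without seeing how the status change is made, the two usual culprits are mutating a different List than the one the adapter was constructed with (ArrayAdapter keeps a reference to the list you pass in, not a copy), or calling notifyDataSetChanged() off the UI thread. A minimal sketch of the pattern that does refresh the rows; the field and setter names here are hypothetical, not from the post:

        // Inside the activity; "friends" must be the very List instance that was
        // passed to the FriendRowAdapter constructor.
        private void setFriendOnline(int position) {
            Friend friend = friends.get(position);
            friend.setStatusType(Status.Type.ONLINE);    // hypothetical setter on Friend

            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    adapter.notifyDataSetChanged();      // re-runs getView() for the visible rows
                }
            });
        }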

    Read the article

  • ASP.NET MVC, Webform hybrid

    - by Greg Ogle
    We (me and my team) have an ASP.NET MVC application and we are integrating a page or two that are Web Forms. We are trying to reuse the Master Page from our MVC part of the app in the WebForms part. We have found a way of rendering an MVC partial view in web forms, which works great, until we try to do a postback, which is the reason for using a WebForm. The Error: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. The Code to render the partial view from a WebForm (credited to "How to include a partial view inside a webform"): public static class WebFormMVCUtil { public static void RenderPartial(string partialName, object model) { //get a wrapper for the legacy WebForm context var httpCtx = new HttpContextWrapper(System.Web.HttpContext.Current); //create a mock route that points to the empty controller var rt = new RouteData(); rt.Values.Add("controller", "WebFormController"); //create a controller context for the route and http context var ctx = new ControllerContext( new RequestContext(httpCtx, rt), new WebFormController()); //find the partial view using the viewengine var view = ViewEngines.Engines.FindPartialView(ctx, partialName).View; //create a view context and assign the model var vctx = new ViewContext(ctx, view, new ViewDataDictionary { Model = model }, new TempDataDictionary()); //ERROR OCCURS ON THIS LINE view.Render(vctx, System.Web.HttpContext.Current.Response.Output); } } My only experience with this error is in the context of a web farm, which is not the case here. Also, I understand that the machine key is used for decrypting the ViewState. Any information on how to diagnose this issue would be appreciated. A Work-around: So far the work-around is to move the header content to a PartialView, then use an AJAX call to call a page with just the Partial View from the WebForms, and then use the PartialView directly on the MVC Views. Also, we are still able to share non-tech-specific parts of the Master Page, i.e. anything that is not MVC specific. Still, this is not an ideal solution; a server-side solution is still desired. Also, this solution has issues with more sophisticated controls that use JavaScript, particularly dynamically generated script as used by 3rd-party controls.

    Read the article

  • ArrayIndexOutOfBoundsException with custom Android Adapter for multiple views in ListView

    - by Dan Watling
    I am attempting to create a custom Adapter for my ListView since each item in the list can have a different view (a link, toggle, or radio group), but when I try to run the Activity that uses the ListView I receive an error and the app stops. The application is targeted for the Android 1.6 platform. The code: public class MenuListAdapter extends BaseAdapter { private static final String LOG_KEY = MenuListAdapter.class.getSimpleName(); protected List<MenuItem> list; protected Context ctx; protected LayoutInflater inflater; public MenuListAdapter(Context context, List<MenuItem> objects) { this.list = objects; this.ctx = context; this.inflater = (LayoutInflater)this.ctx.getSystemService(Context.LAYOUT_INFLATER_SERVICE); } @Override public View getView(int position, View convertView, ViewGroup parent) { Log.i(LOG_KEY, "Position: " + position + "; convertView = " + convertView + "; parent=" + parent); MenuItem item = list.get(position); Log.i(LOG_KEY, "Item=" + item ); if (convertView == null) { convertView = this.inflater.inflate(item.getLayout(), null); } return convertView; } @Override public boolean areAllItemsEnabled() { return false; } @Override public boolean isEnabled(int position) { return true; } @Override public int getCount() { return this.list.size(); } @Override public MenuItem getItem(int position) { return this.list.get(position); } @Override public long getItemId(int position) { return position; } @Override public int getItemViewType(int position) { Log.i(LOG_KEY, "getItemViewType: " + this.list.get(position).getLayout()); return this.list.get(position).getLayout(); } @Override public int getViewTypeCount() { Log.i(LOG_KEY, "getViewTypeCount: " + this.list.size()); return this.list.size(); } } The error I receive: java.lang.ArrayIndexOutOfBoundsException at android.widget.AbsListView$RecycleBin.addScrapView(AbsListView.java:3523) at android.widget.ListView.measureHeightOfChildren(ListView.java:1158) at android.widget.ListView.onMeasure(ListView.java:1060) at android.view.View.measure(View.java:7703) I do know that the application is returning from getView and everything seems in order. Any ideas on what could be causing this would be appreciated. Thanks, -Dan
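
    The ListView recycler indexes scrap views by the value returned from getItemViewType(), and it expects those values to fall in the range 0 to getViewTypeCount() - 1. Returning raw layout resource IDs (large generated integers) while getViewTypeCount() returns list.size() is the classic trigger for this ArrayIndexOutOfBoundsException inside AbsListView$RecycleBin.addScrapView. A sketch of the usual fix, with hypothetical layout names standing in for the real ones:

        // Map each distinct row layout to a small, stable view-type index.
        private static final int TYPE_LINK = 0;
        private static final int TYPE_TOGGLE = 1;
        private static final int TYPE_RADIO = 2;
        private static final int TYPE_COUNT = 3;

        @Override
        public int getViewTypeCount() {
            return TYPE_COUNT;                            // number of distinct layouts, not list.size()
        }

        @Override
        public int getItemViewType(int position) {
            int layout = list.get(position).getLayout();
            if (layout == R.layout.menu_link)   return TYPE_LINK;    // hypothetical layout ids
            if (layout == R.layout.menu_toggle) return TYPE_TOGGLE;
            return TYPE_RADIO;                            // always within 0..getViewTypeCount()-1
        }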

    Read the article

  • hyperLink on jsf error messages problem

    - by user234194
    I am trying to put link in the error messages produced by JSF. For this I am using the custom renderer, it works(clicking the error, focuses the respective input field) but the problem is , all the form values gets empty. ie when error occurs, all the input fields get empty. Any suggestion will be appreciated. package custom; public class CustomErrorRenderer extends Renderer { @Override @SuppressWarnings("unchecked") public void encodeEnd(FacesContext context, UIComponent component) throws IOException { ResponseWriter writer = context.getResponseWriter(); writer.startElement("div", component); writer.writeAttribute("id", component.getClientId(context), "id"); writer.writeAttribute("style", "color: red", null); writer.startElement("ul", null); Iterator clientIds = context.getClientIdsWithMessages(); while (clientIds.hasNext()) { String clientId = clientIds.next(); Iterator messages = context.getMessages(clientId); if (!messages.hasNext()) { continue; } String javaScript = "var field = document.getElementById('" + clientId + "');" + "if(field == null) return false;" + "field.focus(); return false;"; writer.startElement("li", null); writer.startElement("a", null); writer.writeAttribute("onclick", javaScript, null); writer.writeAttribute("href", "#", null); while (messages.hasNext()) { writer.writeText(messages.next().getSummary(), null); } writer.endElement("a"); writer.endElement("li"); } writer.endElement("ul"); writer.endElement("div"); } } This renderer is defined in faces-config.xml: add to base HTML_BASIC renderkit HTML_BASIC HTML_BASIC CustomErrorRenderer javax.faces.Output custom.CustomErrorRenderer custom.CustomErrorRenderer CustomErrorMessages custom.Errors javax.faces.component.UIOutput a tag class: package custom; import javax.faces.webapp.UIComponentELTag; public class CustomErrorTag extends UIComponentELTag { @Override public String getComponentType() { return "custom.Errors"; } @Override public String getRendererType() { return "custom.CustomErrorRenderer"; } } This is defined in a TLD file: http://java.sun.com/xml/ns/javaee/web-jsptaglibrary_2_1.xsd" version="2.1" 1.0 custom http://custom errors custom.CustomErrorTag empty This goes at the top of the JSP page: <%@ taglib prefix="custom" uri="http://custom"%

    Read the article

  • Can LINQ-to-SQL omit unspecified columns on insert so a database default value is used?

    - by Todd Ropog
    I have a non-nullable database column which has a default value set. When inserting a row, sometimes a value is specified for the column, sometimes one is not. This works fine in TSQL when the column is omitted. For example, given the following table: CREATE TABLE [dbo].[Table1]( [id] [int] IDENTITY(1,1) NOT NULL, [col1] [nvarchar](50) NOT NULL, [col2] [nvarchar](50) NULL, CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED ([id] ASC) ) GO ALTER TABLE [dbo].[Table1] ADD CONSTRAINT [DF_Table1_col1] DEFAULT ('DB default') FOR [col1] The following two statements will work: INSERT INTO Table1 (col1, col2) VALUES ('test value', '') INSERT INTO Table1 (col2) VALUES ('') In the second statement, the default value is used for col1. The problem I have is when using LINQ-to-SQL (L2S) with a table like this. I want to produce the same behavior, but I can't figure out how to make L2S do that. I want to be able to run the following code and have the first row get the value I specify and the second row get the default value from the database: var context = new DataClasses1DataContext(); var row1 = new Table1 { col1 = "test value", col2 = "" }; context.Table1s.InsertOnSubmit(row1); context.SubmitChanges(); var row2 = new Table1 { col2 = "" }; context.Table1s.InsertOnSubmit(row2); context.SubmitChanges(); If the Auto Generated Value property of col1 is False, the first row is created as desired, but the second row fails with a null error on col1. If Auto Generated Value is True, both rows are created with the default value from the database. I've tried various combinations of Auto Generated Value, Auto-Sync and Nullable, but nothing I've tried gives the behavior I want. L2S does not omit the column from the insert statement when no value is specified. Instead it does something like this: INSERT INTO Table1 (col1, col2) VALUES (null, '') ...which of course causes a null error on col1. Is there some way to get L2S to omit a column from the insert statement if no value is given? Or is there some other way to get the behavior I want? I need the default value at the database level because not all row inserts are done via L2S, and in some cases the default value is a little more complex than a hard coded value (e.g. creating the default based on another field) so I'd rather avoid duplicating that logic.

    Read the article

  • OSGI classcast exception on felix

    - by Nico
    Hi, I'm fairly new to OSGi and am trying to get a functional proof of concept together. The setup is that my common API is created in a bundle creatively named common-api.jar with no bundle activator, but it exports all its interfaces. The one of interest in this situation is DatabaseService.java. I then have a second bundle called systemx-database-service that implements the database service interface. This works fine: in the activator of the implementation bundle I test the connection to the database and select some arbitrary values. I also register the service I want to be available to the other bundles like so: context.registerService(DatabaseService.class.getName(), new SystemDatabaseServiceImpl(context), new Properties()); The basic idea being that when you look up a service reference for a DatabaseService you'll get back the SystemDatabaseService implementation. When I do an inspect service the output is this: -> inspect s c 69 System Database Service (69) provides services: ---------------------------------------------- objectClass = za.co.xxx.xxx.common.api.DatabaseService service.id = 39 which would lead me to believe that if I do this in a test bundle: context.getService(context.getServiceReference(DatabaseService.class)); I should get back an instance of DatabaseService.class, but alas, no such luck. It simply seems like it cannot find the service. Stick with me here, my story gets stranger. Figuring there was nowhere to go but up, I wrote this monstrosity: for (Bundle bundle : bundles) { if (bundle.getSymbolicName().equals("za.co.xxx.xxx.database-service")) { ServiceReference[] registeredServices = bundle.getRegisteredServices(); for (ServiceReference ref : registeredServices) { DatabaseService service = (DatabaseService) context.getService(ref); // use service here. } } } } Now I can actually see the service reference, but I get this error: java.lang.ClassCastException: za.co.xxx.xxx.database.service.impl.SystemDatabaseServiceImpl cannot be cast to za.co.xxx.xx.common.api.DatabaseService which is crazy since the implementation clearly implements the interface! Any help would be appreciated. Please keep in mind I'm very new at the OSGi way of thinking, so my whole approach here might be flawed. Oh, and if anyone wants the manifests I can post them. I'm using the maven-bnd-plugin to build and executing on Felix. Thanks, Nico
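
    A ClassCastException between two classes with exactly the same name almost always means the interface is being loaded twice: typically the implementation (or consumer) bundle embeds its own copy of the za.co.xxx.xxx.common.api package instead of importing the one exported by common-api, so the two bundles see different Class objects for DatabaseService and the cast can never succeed. The fix is usually in the manifests (Import-Package the API everywhere, and make sure only common-api actually contains the API classes; with the maven-bnd-plugin that means keeping the API out of the implementation bundle's Private-Package/Export-Package). A small diagnostic sketch using only standard OSGi calls, which makes any double loading visible:

        // Run from the test bundle's activator. Assumes DatabaseService is the
        // first interface the implementation class declares.
        ServiceReference ref =
                context.getServiceReference(DatabaseService.class.getName());
        Object svc = context.getService(ref);

        // If these print different bundles/class loaders, the API package is
        // wired from two places and the ClassCastException follows.
        System.out.println("consumer sees: " + DatabaseService.class.getClassLoader());
        System.out.println("provider sees: " + svc.getClass().getInterfaces()[0].getClassLoader());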

    Read the article

  • Android: database reading problem throws exception

    - by Vamsi
    Hi, i am having this problem with the android database. I adopted the DBAdapter file the NotepadAdv3 example from the google android page. DBAdapter.java public class DBAdapter { private static final String TAG = "DBAdapter"; private static final String DATABASE_NAME = "PasswordDb"; private static final String DATABASE_TABLE = "myuserdata"; private static final String DATABASE_USERKEY = "myuserkey"; private static final int DATABASE_VERSION = 2; public static final String KEY_USERKEY = "userkey"; public static final String KEY_TITLE = "title"; public static final String KEY_DATA = "data"; public static final String KEY_ROWID = "_id"; private final Context mContext; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; private static final String DB_CREATE_KEY = "create table " + DATABASE_USERKEY + " (" + "userkey text not null" +");"; private static final String DB_CREATE_DATA = "create table " + DATABASE_TABLE + " (" + "_id integer primary key autoincrement, " + "title text not null" + "data text" +");"; private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(DB_CREATE_KEY); db.execSQL(DB_CREATE_DATA); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL("DROP TABLE IF EXISTS myuserkey"); db.execSQL("DROP TABLE IF EXISTS myuserdata"); onCreate(db); } } public DBAdapter(Context ctx) { this.mContext = ctx; } public DBAdapter Open() throws SQLException{ try { mDbHelper = new DatabaseHelper(mContext); } catch(Exception e){ Log.e(TAG, e.toString()); } mDb = mDbHelper.getWritableDatabase(); return this; } public void close(){ mDbHelper.close(); } public Long storeKey(String userKey){ ContentValues initialValues = new ContentValues(); initialValues.put(KEY_USERKEY, userKey); try { mDb.delete(DATABASE_USERKEY, "1=1", null); } catch(Exception e) { Log.e(TAG, e.toString()); } return mDb.insert(DATABASE_USERKEY, null, initialValues); } public String retrieveKey() { final Cursor c; try { c = mDb.query(DATABASE_USERKEY, new String[] { KEY_USERKEY}, null, null, null, null, null); }catch(Exception e){ Log.e(TAG, e.toString()); return ""; } if(c.moveToFirst()){ return c.getString(0); } else{ Log.d(TAG, "UserKey Empty"); } return ""; } //not including any function related to "myuserdata" table } Class1.java { mUserKey = mDbHelper.retrieveKey(); mDbHelper.storeKey(Key); } the error that i am receiving is from Log.e(TAG, e.toString()) in the methods retrieveKey() and storeKey() "no such table: myuserkey: , while compiling: SELECT userkey FROM myuserkey"
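
    Two things are worth checking here, sketched with the names from the post. First, SQLiteOpenHelper.onCreate() only runs when the database file is first created; if an earlier build of the app created "PasswordDb" before the myuserkey table existed, onCreate() will never run again until DATABASE_VERSION is bumped (or the app's data is cleared), which produces exactly this "no such table" error. Second, DB_CREATE_DATA is missing a comma between the title and data columns, so that create statement is malformed and, because onCreate() runs inside a transaction, a failure there can roll back the myuserkey table as well. Neither is guaranteed to be the cause, but both are cheap to rule out:

        // Bump the version so onUpgrade() (and then onCreate()) runs again on existing installs.
        private static final int DATABASE_VERSION = 3;   // was 2

        // Add the missing comma after "title text not null".
        private static final String DB_CREATE_DATA =
            "create table " + DATABASE_TABLE + " (" +
            "_id integer primary key autoincrement, " +
            "title text not null, " +
            "data text" +
            ");";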

    Read the article

  • Django facebook integration error

    - by Gaurav
    I'm trying to integrate facebook into my application so that users can use their FB login to login to my site. I've got everything up and running and there are no issues when I run my site using the command line python manage.py runserver But this same code refuses to run when I try and run it through Apache. I get the following error: Environment: Request Method: GET Request URL: http://helvetica/foodfolio/login Django Version: 1.1.1 Python Version: 2.6.4 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'foodfolio.app', 'foodfolio.facebookconnect'] Installed Middleware: ('django.contrib.sessions.middleware.SessionMiddleware', 'facebook.djangofb.FacebookMiddleware', 'django.middleware.common.CommonMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'facebookconnect.middleware.FacebookConnectMiddleware') Template error: In template /home/swat/website-apps/foodfolio/facebookconnect/templates/facebook/js.html, error at line 2 Caught an exception while rendering: No module named app.models 1 : <script type="text/javascript"> 2 : FB_RequireFeatures(["XFBML"], function() {FB.Facebook.init("{{ facebook_api_key }}", " {% url facebook_xd_receiver %} ")}); 3 : 4 : function facebookConnect(loginForm) { 5 : FB.Connect.requireSession(); 6 : FB.Facebook.get_sessionState().waitUntilReady(function(){loginForm.submit();}); 7 : } 8 : function pushToFacebookFeed(data){ 9 : if(data['success']){ 10 : var template_data = data['template_data']; 11 : var template_bundle_id = data['template_bundle_id']; 12 : feedTheFacebook(template_data,template_bundle_id,function(){}); Traceback: File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py" in get_response 92. response = callback(request, *callback_args, **callback_kwargs) File "/home/swat/website-apps/foodfolio/app/controller.py" in __showLogin__ 238. context_instance = RequestContext(request)) File "/usr/lib/pymodules/python2.6/django/shortcuts/__init__.py" in render_to_response 20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs) File "/usr/lib/pymodules/python2.6/django/template/loader.py" in render_to_string 108. return t.render(context_instance) File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render 178. return self.nodelist.render(context) File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render 779. bits.append(self.render_node(node, context)) File "/usr/lib/pymodules/python2.6/django/template/debug.py" in render_node 71. result = node.render(context) File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render 946. autoescape=context.autoescape)) File "/usr/lib/pymodules/python2.6/django/template/__init__.py" in render 779. bits.append(self.render_node(node, context)) File "/usr/lib/pymodules/python2.6/django/template/debug.py" in render_node 81. raise wrapped Exception Type: TemplateSyntaxError at /foodfolio/login Exception Value: Caught an exception while rendering: No module named app.models

    Read the article

  • How to structure game states in an entity/component-based system

    - by Eva
    I'm making a game designed with the entity-component paradigm that uses systems to communicate between components as explained here. I've reached the point in my development that I need to add game states (such as paused, playing, level start, round start, game over, etc.), but I'm not sure how to do it with my framework. I've looked at this code example on game states which everyone seems to reference, but I don't think it fits with my framework. It seems to have each state handling its own drawing and updating. My framework has a SystemManager that handles all the updating using systems. For example, here's my RenderingSystem class: public class RenderingSystem extends GameSystem { private GameView gameView_; /** * Constructor * Creates a new RenderingSystem. * @param gameManager The game manager. Used to get the game components. */ public RenderingSystem(GameManager gameManager) { super(gameManager); } /** * Method: registerGameView * Registers gameView into the RenderingSystem. * @param gameView The game view registered. */ public void registerGameView(GameView gameView) { gameView_ = gameView; } /** * Method: triggerRender * Adds a repaint call to the event queue for the dirty rectangle. */ public void triggerRender() { Rectangle dirtyRect = new Rectangle(); for (GameObject object : getRenderableObjects()) { GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class); dirtyRect.add(graphicsComponent.getDirtyRect()); } gameView_.repaint(dirtyRect); } /** * Method: renderGameView * Renders the game objects onto the game view. * @param g The graphics object that draws the game objects. */ public void renderGameView(Graphics g) { for (GameObject object : getRenderableObjects()) { GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class); if (!graphicsComponent.isVisible()) continue; GraphicsComponent.Shape shape = graphicsComponent.getShape(); BoundsComponent boundsComponent = object.getComponent(BoundsComponent.class); Rectangle bounds = boundsComponent.getBounds(); g.setColor(graphicsComponent.getColor()); if (shape == GraphicsComponent.Shape.RECTANGULAR) { g.fill3DRect(bounds.x, bounds.y, bounds.width, bounds.height, true); } else if (shape == GraphicsComponent.Shape.CIRCULAR) { g.fillOval(bounds.x, bounds.y, bounds.width, bounds.height); } } } /** * Method: getRenderableObjects * @return The renderable game objects. */ private HashSet<GameObject> getRenderableObjects() { return gameManager.getGameObjectManager().getRelevantObjects( getClass()); } } Also all the updating in my game is event-driven. I don't have a loop like theirs that simply updates everything at the same time. I like my framework because it makes it easy to add new GameObjects, but doesn't have the problems some component-based designs encounter when communicating between components. I would hate to chuck it just to get pause to work. Is there a way I can add game states to my game without removing the entity-component design? Does the game state example actually fit my framework, and I'm just missing something? EDIT: I might not have explained my framework well enough. My components are just data. If I was coding in C++, they'd probably be structs. Here's an example of one: public class BoundsComponent implements GameComponent { /** * The position of the game object. */ private Point pos_; /** * The size of the game object. 
*/ private Dimension size_; /** * Constructor * Creates a new BoundsComponent for a game object with initial position * initialPos and initial size initialSize. The position and size combine * to make up the bounds. * @param initialPos The initial position of the game object. * @param initialSize The initial size of the game object. */ public BoundsComponent(Point initialPos, Dimension initialSize) { pos_ = initialPos; size_ = initialSize; } /** * Method: getBounds * @return The bounds of the game object. */ public Rectangle getBounds() { return new Rectangle(pos_, size_); } /** * Method: setPos * Sets the position of the game object to newPos. * @param newPos The value to which the position of the game object is * set. */ public void setPos(Point newPos) { pos_ = newPos; } } My components do not communicate with each other. Systems handle inter-component communication. My systems also do not communicate with each other. They have separate functionality and can easily be kept separate. The MovementSystem doesn't need to know what the RenderingSystem is rendering to move the game objects correctly; it just need to set the right values on the components, so that when the RenderingSystem renders the game objects, it has accurate data. The game state could not be a system, because it needs to interact with the systems rather than the components. It's not setting data; it's determining which functions need to be called. A GameStateComponent wouldn't make sense because all the game objects share one game state. Components are what make up objects and each one is different for each different object. For example, the game objects cannot have the same bounds. They can have overlapping bounds, but if they share a BoundsComponent, they're really the same object. Hopefully, this explanation makes my framework less confusing.
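
    One common way to fit states into this kind of design is to keep them out of the component layer entirely: components stay pure data, and a small state machine lives next to the GameManager, with each system (or the event dispatch code) consulting it before acting. A rough sketch of that idea, not tied to the poster's actual class names beyond GameManager:

        // One possible shape for an engine-level state machine.
        public enum GameState { LEVEL_START, ROUND_START, PLAYING, PAUSED, GAME_OVER }

        public class GameStateManager {
            private GameState current = GameState.LEVEL_START;

            public GameState getCurrent() { return current; }

            public void change(GameState next) {
                current = next;
                // fire an event here so interested systems can react to the transition
            }
        }

        // Inside an event-driven system, gate the work on the current state, e.g.:
        // if (gameManager.getStateManager().getCurrent() != GameState.PLAYING) return;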

    Read the article

  • How to change speed without changing path travelled?

    - by Ben Williams
    I have a ball which is being thrown from one side of a 2D space to the other. The formula I am using for calculating the ball's position at any one point in time is: x = x0 + vx0*t y = y0 + vy0*t - 0.5*g*t*t where g is gravity, t is time, x0 is the initial x position, vx0 is the initial x velocity. What I would like to do is change the speed of this ball, without changing how far it travels. Let's say the ball starts in the lower left corner, moves upwards and rightwards in an arc, and finishes in the lower right corner, and this takes 5s. What I would like to be able to do is change this so it takes 10s or 20s, but the ball still follows the same curve and finishes in the same position. How can I achieve this? All I can think of is manipulating t but I don't think that's a good idea. I'm sure it's something simple, but my maths is pretty shaky.
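
    Given exactly those two formulas, the flight can be slowed down by a factor k without changing its shape by scaling both launch velocities by 1/k and gravity by 1/k^2; substituting t' = k*t then reproduces the original x and y at every point, so the ball follows the same curve and lands in the same place, just k times slower. A small sketch (the variable names are mine, assuming vx0, vy0 and g hold the values used in the formulas above):

        double k = 2.0;                // e.g. turn a 5 s flight into a 10 s flight
        double vx0Slow = vx0 / k;      // scaled initial horizontal velocity
        double vy0Slow = vy0 / k;      // scaled initial vertical velocity
        double gSlow   = g / (k * k);  // scaled gravity
        // x(k*t) and y(k*t) computed with these values equal the original x(t) and y(t).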

    Read the article

  • Can I constrain a template parameter class to implement the interfaces that are supported by other?

    - by K. Georgiev
    The name is a little blurry, so here's the situation: I'm writing code to use some 'trajectories'. The trajectories are an abstract thing, so I describe them with different interfaces. So I have code like this: namespace Trajectories { public interface IInitial < Atom > { Atom Initial { get; set; } } public interface ICurrent < Atom > { Atom Current { get; set; } } public interface IPrevious < Atom > { Atom Previous { get; set; } } public interface ICount < Atom > { int Count { get; } } public interface IManualCount < Atom > : ICount < Atom > { int Count { get; set; } } ... } Every concrete implementation of a trajectory will implement some of the above interfaces. Here's a concrete implementation of a trajectory: public class SimpleTrajectory < Atom > : IInitial < Atom >, ICurrent < Atom >, ICount < Atom > { // ICount public int Count { get; private set; } // IInitial private Atom initial; public Atom Initial { get { return initial; } set { initial = current = value; Count = 1; } } // ICurrent private Atom current; public Atom Current { get { return current; } set { current = value; Count++; } } } Now, I want to be able to deduce things about the trajectories, so, for example, I want to support predicates about different properties of some trajectory: namespace Conditions { public interface ICondition < Atom, Trajectory > { bool Test(ref Trajectory t); } public class CountLessThan < Atom, Trajectory > : ICondition < Atom, Trajectory > where Trajectory : Trajectories.ICount < Atom > { public int Value { get; set; } public CountLessThan() { } public bool Test(ref Trajectory t) { return t.Count < Value; } } public class CurrentNormLessThan < Trajectory > : ICondition < Complex, Trajectory > where Trajectory : Trajectories.ICurrent < Complex > { public double Value { get; set; } public CurrentNormLessThan() { } public bool Test(ref Trajectory t) { return t.Current.Norm() < Value; } } } Now, here's the question: what if I wanted to implement an AND predicate? It would be something like this: public class And < Atom, CondA, TrajectoryA, CondB, TrajectoryB, Trajectory > : ICondition < Atom, Trajectory > where CondA : ICondition < Atom, TrajectoryA > where TrajectoryA : // Some interfaces where CondB : ICondition < Atom, TrajectoryB > where TrajectoryB : // Some interfaces where Trajectory : // MUST IMPLEMENT THE INTERFACES FOR TrajectoryA AND THE INTERFACES FOR TrajectoryB { public CondA A { get; set; } public CondB B { get; set; } public bool Test(ref Trajectory t){ return A.Test(t) && B.Test(t); } } How can I say: support only those trajectories for which the arguments of AND are OK? So that I'd be able to write: var vand = new CountLessThan(32) & new CurrentNormLessThan(4.0); I think if I create an overall interface for every subset of interfaces I could do it, but it would become quite ugly.

    Read the article

  • Spring/RMI server error

    - by 4herpsand7derpsago
    We have a Spring MVC web app (WAR) deploying to Tomcat (6.0.35) that launches a thread inside a separate JVM at deploy time (don't ask why - not my design) and then communicates with that thread via RMI over port 8888. Despite being totally convoluded, this was working perfectly fine up until yesterday, and now the thread is failing at startup and despite our best efforts to add logging into the mix, we are hitting a wall. This is the only exception we are able to find in the logs: Jun 12, 2012 3:11:36 AM com.ourapp.ImageController destroy SEVERE: Shutdown Error: Lookup of RMI stub failed; nested exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused Jun 12, 2012 3:11:37 AM org.apache.catalina.core.StandardContext listenerStop SEVERE: Exception sending context destroyed event to listener instance of class org.springframework.web.context.ContextLoaderListener java.lang.NoClassDefFoundError: org/springframework/web/context/ContextCleanupListener at org.springframework.web.context.ContextLoaderListener.contextDestroyed(ContextLoaderListener.java:80) at org.apache.catalina.core.StandardContext.listenerStop(StandardContext.java:3973) at org.apache.catalina.core.StandardContext.stop(StandardContext.java:4577) at org.apache.catalina.startup.HostConfig.checkResources(HostConfig.java:1165) at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1271) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:296) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119) at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1337) at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1601) at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1610) at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1590) at java.lang.Thread.run(Thread.java:662) Caused by: java.lang.ClassNotFoundException: org.springframework.web.context.ContextCleanupListener at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1387) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1233) ... 12 more The ImageController is the Spring MVC Controller that is responsible for kicking off this daemon/spawned RMI thread. Based on the verbage of this error, does anybody have any idea what might be causing this "connection refused" error? Running a netstat -an | grep 8888 (this is a Linux machine) produces no output which means nothing is listening on that port. Thanks in advance for any ideas/suggestions that lead to a fix. 
Edit: Here's another ConnectionException we're seeing: Caused by: java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at java.net.Socket.connect(Socket.java:478) at java.net.Socket.<init>(Socket.java:375) at java.net.Socket.<init>(Socket.java:189) at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22) at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128) at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595) ... 74 more

    Read the article

  • Android: handle unexpected internet disconnect while downloading data

    - by M.A. Cape
    Hi, I have here a function that downloads data from a remote server to file. I am still not confident with my code. My question is, what if while reading the stream and saving the data to a file and suddenly I was disconnected in the internet, will these catch exceptions below can really catch that kind of incident? If not, can you suggest how to handle this kind of incident? Note: I call this function in a thread so that the UI won't be blocked. public static boolean getFromRemote(String link, String fileName, Context context){ boolean dataReceived = false; ConnectivityManager connec = (ConnectivityManager)context.getSystemService(Context.CONNECTIVITY_SERVICE); if (connec.getNetworkInfo(0).isConnected() || connec.getNetworkInfo(1).isConnected()){ try { HttpClient httpClient = new DefaultHttpClient(); HttpGet httpGet = new HttpGet(link); HttpParams params = httpClient.getParams(); HttpConnectionParams.setConnectionTimeout(params, 30000); HttpConnectionParams.setSoTimeout(params, 30000); HttpResponse response; response = httpClient.execute(httpGet); int statusCode = response.getStatusLine().getStatusCode(); if (statusCode == 200){ HttpEntity entity = response.getEntity(); InputStream in = null; OutputStream output = null; try{ in = entity.getContent(); String secondLevelCacheDir = context.getCacheDir() + fileName; File imageFile = new File(secondLevelCacheDir); output= new FileOutputStream(imageFile); IOUtilities.copy(in, output); output.flush(); } catch (IOException e) { Log.e("SAVING", "Could not load xml", e); } finally { IOUtilities.closeStream(in); IOUtilities.closeStream(output); dataReceived = true; } } }catch (SocketTimeoutException e){ //Handle not connecting to client !!!! Log.d("SocketTimeoutException Thrown", e.toString()); dataReceived = false; } catch (ClientProtocolException e) { //Handle not connecting to client !!!! Log.d("ClientProtocolException Thrown", e.toString()); dataReceived = false; }catch (MalformedURLException e) { // TODO Auto-generated catch block e.printStackTrace(); dataReceived = false; Log.d("MalformedURLException Thrown", e.toString()); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); dataReceived = false; Log.d("IOException Thrown", e.toString()); } } return dataReceived; }
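
    A dropped connection in the middle of the stream surfaces as an IOException (often a SocketException) thrown from the read loop inside IOUtilities.copy, so it is caught by the inner catch (IOException) rather than by SocketTimeoutException. The bigger problem is that the finally block sets dataReceived = true even when that inner catch has just fired, so the caller cannot tell the download failed. A sketch of the relevant part with the flag moved to the success path only:

        try {
            in = entity.getContent();
            output = new FileOutputStream(imageFile);
            IOUtilities.copy(in, output);
            output.flush();
            dataReceived = true;            // only reached if the whole stream was copied
        } catch (IOException e) {
            Log.e("SAVING", "Download interrupted", e);
            dataReceived = false;
        } finally {
            IOUtilities.closeStream(in);
            IOUtilities.closeStream(output);
        }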

    Read the article

  • Conways Game of Life C#

    - by Darren Young
    Hi, not sure if this is the correct place for this question or SO - mods, please move if necessary. I am going to have a go at creating GoL over the weekend as a little test project: http://en.wikipedia.org/wiki/Conway's_Game_of_Life I understand the algorithm, but I just wanted to check the implementation with somebody who may have tried it. Essentially, my first (basic) implementation will be a static grid at a set speed. If I understand correctly, these are the steps I will need: (1) initial seed; (2) create a 2D array with the initial setup; (3) for each iteration, create a temporary array, calculating each cell's new state based on the Game of Life algorithm; (4) assign the temp array to the proper array; (5) redraw the grid from the proper array. My concerns are over speed. When I am populating the grid from the array, would it simply be a case of looping through the array, assigning on or off to each grid cell and then redrawing the grid? Am I on the correct path?
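
    The step list above is essentially all there is to it; the only real trap is writing new states into the same array you are still reading neighbour counts from, which the temporary-array step already avoids. A sketch of the per-generation swap (written in Java purely for illustration here; it translates to C# almost token for token, and countLiveNeighbours is a hypothetical helper):

        boolean[][] next = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int n = countLiveNeighbours(grid, r, c);        // hypothetical helper
                next[r][c] = n == 3 || (grid[r][c] && n == 2);  // standard GoL rules
            }
        }
        grid = next;   // swap references; no cell-by-cell copy back is needed
        // Redrawing is then one pass over 'grid', setting each cell on or off.
        // At typical grid sizes this is nowhere near a performance problem.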

    Read the article

  • How to intercept 401 from Forms Authentication in ASP.NET MVC?

    - by Jiho Han
    I would like to generate a 401 page if the user does not have the right permission. The user requests a url and is redirected to the login page (I have deny all anonymous in web.config). The user logs in successfully and is redirected to the original url. However, upon permission check, it is determined that the user does not have the required permission, so I would like to generate a 401. But Forms Authentication always handles 401 and redirects the user to the login page. To me, this isn't correct. The user has already authenticated, the user just does not have the proper authorization. In other scenarios, such as in ajax or REST service scenario, I definitely do not want the login page - I need the proper 401 page. So far, I've tried custom Authorize filter to return ViewResult with 401 but didn't work. I then tried a normal Action Filter, overriding OnActionExecuting, which did not work either. What I was able to do is handle an event in global.asax, PostRequestHandlerExecute, and check for the permission then write out directly to response: if (permissionDenied) { Context.Response.StatusCode = 401; Context.Response.Clear(); Context.Response.Write("Permission Denied"); Context.Response.Flush(); Context.Response.Close(); return; } That works but it's not really what I want. First of all, I'm not even sure if that is the right event or the place in the pipeline to do that. Second, I want the 401 page to have a little more content. Preferably, it should be an aspx page with possibly the same master page as the rest of the site. That way, anyone browsing the site can see that the permission is denied but with the same look and feel, etc. but the ajax or service user will get the proper status code to act on. Any idea how this can be achieved? I've seen other posts with similar requests but didn't see a solution that I can use. And no, I do not want a 403.

    Read the article

  • Android - Start service on boot

    - by Gady
    From everything I've seen on Stack Exchange and elsewhere, I have everything set up correctly to start an IntentService when Android OS boots. Unfortunately it is not starting on boot, and I'm not getting any errors. Maybe the experts can help... Manifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.phx.batterylogger" android:versionCode="1" android:versionName="1.0" android:installLocation="internalOnly"> <uses-sdk android:minSdkVersion="8" /> <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" /> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> <uses-permission android:name="android.permission.BATTERY_STATS" /> <application android:icon="@drawable/icon" android:label="@string/app_name"> <service android:name=".BatteryLogger"/> <receiver android:name=".StartupIntentReceiver"> <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED" /> </intent-filter> </receiver> </application> </manifest> BroadcastReceiver for Startup: package com.phx.batterylogger; import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; public class StartupIntentReceiver extends BroadcastReceiver { @Override public void onReceive(Context context, Intent intent) { Intent serviceIntent = new Intent(context, BatteryLogger.class); context.startService(serviceIntent); } } UPDATE: I tried just about all of the suggestions below, and I added logging such as Log.v("BatteryLogger", "Got to onReceive, about to start service"); to the onReceive handler of the StartupIntentReceiver, and nothing is ever logged. So it isn't even making it to the BroadcastReceiver. I think I'm deploying the APK and testing correctly, just running Debug in Eclipse and the console says it successfully installs it to my Xoom tablet at \BatteryLogger\bin\BatteryLogger.apk. Then to test, I reboot the tablet and then look at the logs in DDMS and check the Running Services in the OS settings. Does this all sound correct, or am I missing something? Again, any help is much appreciated.

    Read the article

  • Calculus? Need help solving for a time-dependent variable given some other variables.

    - by user451527
    Long story short, I'm making a platform game. I'm not old enough to have taken Calculus yet, so I know not of derivatives or integrals, but I know of them. The desired behavior is for my character to automagically jump when there is a block to either side of him that is above the one he's standing on; for instance, stairs. This way the player can just hold left / right to climb stairs, instead of having to spam the jump key too. The issue is with the way I've implemented jumping; I've decided to go mario-style, and allow the player to hold 'jump' longer to jump higher. To do so, I have a 'jump' variable which is added to the player's Y velocity. The jump variable increases to a set value when the 'jump' key is pressed, and decreases very quickly once the 'jump' key is released, but decreases less quickly so long as you hold the 'jump' key down, thus providing continuous acceleration up as long as you hold 'jump.' This also makes for a nice, flowing jump, rather than a visually jarring, abrupt acceleration. So, in order to account for variable stair height, I want to be able to calculate exactly what value the 'jump' variable should get in order to jump exactly to the height of the stair; preferably no more, no less, though slightly more is permissible. This way the character can jump up steep or shallow flights of stairs without it looking weird or being slow. There are essentially 5 variables in play:
    h - the height the character needs to jump to reach the stair top
    j - the jump acceleration variable
    v - the vertical velocity of the character
    p - the vertical position of the character
    d - initial vertical position of the player minus final position
    Each timestep:
    j -= 1.5; //the jump variable's deceleration
    v -= j; //the jump value's influence on vertical speed
    v *= 0.95; //friction on the vertical speed
    v += 1; //gravity
    p += v; //add the vertical speed to the vertical position
    v-initial is known to be zero
    v-final is known to be zero
    p-initial is known
    p-final is known
    d is known to be p-initial minus p-final
    j-final is known to be zero
    j-initial is unknown
    Given all of these facts, how can I make an equation that will solve for j? tl;dr How do I Calculus? Much thanks to anyone who's made it this far and decides to plow through this problem.
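
    With the 0.95 friction term in the update rule there is no tidy closed form for j-initial, but the timestep itself can be reused as the equation: simulate it for a trial j, measure the peak height, and binary-search j until the peak matches the stair height h (the peak only grows as j grows, so bisection is safe). A sketch of that approach; it assumes, beyond what the post states, that j simply decays by 1.5 per step and never goes below zero, and that "up" is the negative p direction (gravity being the +1 term):

        public class JumpSolver {

            // Height reached above the start for a given initial jump value,
            // using the poster's own per-step update rule.
            static double peakHeight(double jInitial) {
                double j = jInitial, v = 0, p = 0, peak = 0;
                for (int step = 0; step < 10000; step++) {
                    j = Math.max(0, j - 1.5);      // jump thrust decays, clamped at zero
                    v -= j;
                    v *= 0.95;                     // friction
                    v += 1;                        // gravity
                    p += v;
                    peak = Math.max(peak, -p);     // up is negative p, so height climbed is -p
                    if (j == 0 && v >= 0) break;   // thrust spent and falling: past the apex
                }
                return peak;
            }

            // Smallest j-initial whose peak reaches height h, found by bisection.
            static double jumpValueFor(double h) {
                double lo = 0, hi = 500;           // assumes the stair is reachable with j <= 500
                for (int i = 0; i < 50; i++) {
                    double mid = (lo + hi) / 2;
                    if (peakHeight(mid) < h) lo = mid; else hi = mid;
                }
                return hi;
            }

            public static void main(String[] args) {
                System.out.println(jumpValueFor(32));   // j needed to rise 32 units, for example
            }
        }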

    Read the article

  • 2013 EC Elections Results

    - by Heather VanCura
    The 2013 Fall Executive Committee (EC) Elections process is now complete. Congratulations to the following JCP Members as the new and re-elected EC Members! We had a slight increase in JCP Member voter turnout at ~25% (up from 24% in 2012). All Ratified candidates and the top eight Elected candidates were elected by the JCP Membership. As part of the transition to a merged EC, Members elected in 2013 are ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one-year term (details below).
    Ratified Seats: Credit Suisse, Ericsson, Freescale, Fujitsu, Gemalto M2M, Goldman Sachs, Hewlett-Packard, IBM, Intel, Nokia, Red Hat, SAP, SouJava, Software AG, TOTVS and V2COM.
    Open Election Seats: Eclipse Foundation, Twitter, London Java Community, CloudBees, ARM, Azul Systems, Werner Keil and MoroccoJUG.
    Newly elected EC Members take their seats on Tuesday, 12 November 2013. More information is available on the JCP Elections page.
    Detailed Election Results
    Voting Period: 15 - 28 October 2013.
    Number of Eligible Voters: 1088
    Percent of Eligible Members Casting Votes: 24.77%
    Ratified Seats:
    Candidate                        Yes Votes (%)   No Votes (%)   Abstentions
    Credit Suisse (2 year term)      196 (84)        38 (16)        36
    Ericsson (2 year term)           196 (88)        27 (12)        47
    Freescale (1 year term)          151 (74)        53 (26)        66
    Fujitsu (2 year term)            194 (87)        29 (13)        47
    Gemalto M2M (1 year term)        170 (80)        42 (20)        58
    Goldman Sachs (1 year term)      143 (64)        80 (36)        47
    Hewlett-Packard (2 year term)    191 (82)        43 (18)        36
    IBM (2 year term)                226 (91)        22 (9)         22
    Intel (2 year term)              214 (90)        24 (10)        32
    Nokia (1 year term)              139 (64)        78 (36)        53
    Red Hat (2 year term)            245 (95)        12 (5)         13
    SAP (1 year term)                166 (75)        56 (25)        48
    SouJava (2 year term)            226 (92)        19 (8)         25
    Software AG (1 year term)        167 (78)        47 (22)        56
    TOTVS (1 year term)              129 (69)        59 (31)        82
    V2COM (1 year term)              135 (71)        54 (29)        81
    Open Election Seats: The top eight candidates have been elected; the top four receive a two-year term, and the next four receive a one-year term.
    Candidate                            Votes (%)
    Eclipse Foundation (2 year term)     221 (14)
    Twitter (2 year term)                203 (13)
    London Java Community (2 year term)  191 (12)
    CloudBees (2 year term)              179 (11)
    ARM (1 year term)                    176 (11)
    Azul Systems (1 year term)           166 (10)
    Werner Keil (1 year term)            128 (8)
    MoroccoJUG (1 year term)             93 (6)
    Karan Malhi                          56 (3)
    ChinaNanjingJUG                      51 (3)
    JUG Joglosemar                       47 (3)
    Viresh Wali                          45 (3)
    ITP_JAVA                             44 (3)
    None of the Above                    3 (0)

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn’t work well for various reasons. To explain why, Twitter published an excellent document that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in it’s entirety before moving on because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines. The Role of SinceID Specifying SinceID says to Twitter, “Don’t return tweets earlier than this”. What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries.  The next section will explain what I mean by query set, but a quick explanation is that it’s a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here’s some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries: // last tweet processed on previous query set ulong sinceID = 210024053698867204; ulong maxID; const int Count = 10; var statusList = new List<status>(); Here, I’ve hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won’t have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you’re querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat might be that Twitter won’t return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So, to initialize SinceID at too low of a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you’re trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post. The other variables initialized above include the declaration for MaxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you’ll use the maxID variable to set the MaxID property in queries, which I’ll discuss next. Initializing MaxID On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don’t know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be.  
Here’s the code for the initial query: var userStatusResponse = (from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == "JoeMayo" && tweet.SinceID == sinceID && tweet.Count == Count select tweet) .ToList(); statusList.AddRange(userStatusResponse); // first tweet processed on current query maxID = userStatusResponse.Min( status => ulong.Parse(status.StatusID)) - 1; The query above sets both SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple reasons why the number of tweets that are returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be because there aren’t Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified Tweet ID, assigned via the sinceID variable in the query above. The statusList is an accumulator of all tweets receive during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned. If  the query above doesn’t return any tweets, you’ll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again. Besides querying initial tweets, what’s important about this code is the final line that sets maxID. It retrieves the lowest numbered status ID in the results. Since the lowest numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set. Retrieving Remaining Tweets Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you’ll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set: do { // now add sinceID and maxID userStatusResponse = (from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == "JoeMayo" && tweet.Count == Count && tweet.SinceID == sinceID && tweet.MaxID == maxID select tweet) .ToList(); if (userStatusResponse.Count > 0) { // first tweet processed on current query maxID = userStatusResponse.Min( status => ulong.Parse(status.StatusID)) - 1; statusList.AddRange(userStatusResponse); } } while (userStatusResponse.Count != 0 && statusList.Count < 30); Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we’ve already read and Count specifies the largest number of tweets to return. Earlier, I mentioned how it was important to check how many tweets were returned because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets. 
Reasons why there wouldn't be results include: the first query might already have retrieved all the new tweets, leaving nothing more to get, or there might not have been any new tweets between the SinceID and MaxID settings of the most recent query. The code for loading the returned tweets into statusList and getting the maxID is the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to the oldest, so setting MaxID lets us move from the most recent tweets down to the oldest, as bounded by SinceID. The two loop conditions keep the loop running as long as tweets are still being returned and the maximum number of tweets has not yet been read. Logically, you want to stop reading when you've read all the tweets, and that's indicated by the most recent query returning no results. I put in the check to stop after 30 tweets to keep the demo from running too long – in the console the response scrolls past the available buffer, and I wanted you to be able to see the complete output. Yet there's another point to be made about constraining the number of items you return at one time. The Twitter API has rate limits, and making too many queries per minute will result in an error from Twitter that LINQ to Twitter raises as an exception. To use the API properly, you'll have to ensure you don't exceed this threshold. Looking at statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again.

Summary: Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID - the ID of the most recent tweet you've already received, which sets the lower bound for subsequent queries. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error-handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.

@JoeMayo
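    To make the "store SinceID after every query set" step concrete, here is a minimal sketch that assumes the statusList accumulated in the loop above. The SaveSinceID call is hypothetical - it stands in for whatever storage (a database, a file) is used to initialize the sinceID variable on the next run.

// After the query set completes, the newest tweet processed becomes
// the SinceID for the next query set. SaveSinceID is a hypothetical
// persistence call - replace it with your own storage mechanism.
if (statusList.Count > 0)
{
    sinceID = statusList.Max(status => ulong.Parse(status.StatusID));
    SaveSinceID(sinceID);
}

    On the next run, that stored value is what initializes sinceID before the first query, closing the loop described at the start of the post.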

    Read the article

  • EF4: ObjectContext inconsistent when inserting into a view with triggers

    - by user613567
    I get an InvalidOperationException when inserting records into a view that uses INSTEAD OF triggers in SQL Server with ADO.NET Entity Framework 4. The error message says:

{"The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: The key-value pairs that define an EntityKey cannot be null or empty. Parameter name: record"}
at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options)
at System.Data.Objects.ObjectContext.SaveChanges()

In this simplified example I created two tables, Contacts and Employers, and one view, Contacts_x_Employers, which allows me to insert or retrieve rows into/from these two tables at once. The tables only have Name and ID attributes, and the view is based on a join of both:

CREATE VIEW [dbo].[Contacts_x_Employers]
AS
SELECT dbo.Contacts.ContactName, dbo.Employers.EmployerName
FROM dbo.Contacts
INNER JOIN dbo.Employers ON dbo.Contacts.EmployerID = dbo.Employers.EmployerID

And has this trigger:

CREATE TRIGGER C_x_E_Inserts ON Contacts_x_Employers
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    insert into Employers (EmployerName)
    select i.EmployerName
    from inserted i
    where not i.EmployerName in (select EmployerName from Employers)

    insert into Contacts (ContactName, EmployerID)
    select i.ContactName, e.EmployerID
    from inserted i
    inner join employers e on i.EmployerName = e.EmployerName;
END
GO

The .NET code follows:

using (var Context = new TriggersTestEntities())
{
    Contacts_x_Employers CE1 = new Contacts_x_Employers();
    CE1.ContactName = "J";
    CE1.EmployerName = "T";

    Contacts_x_Employers CE2 = new Contacts_x_Employers();
    CE1.ContactName = "W";
    CE1.EmployerName = "C";

    Context.Contacts_x_Employers.AddObject(CE1);
    Context.Contacts_x_Employers.AddObject(CE2);
    Context.SaveChanges(); // <- line with the error
}

SSDL and CSDL (the view nodes):

<EntityType Name="Contacts_x_Employers">
  <Key>
    <PropertyRef Name="ContactName" />
    <PropertyRef Name="EmployerName" />
  </Key>
  <Property Name="ContactName" Type="varchar" Nullable="false" MaxLength="50" />
  <Property Name="EmployerName" Type="varchar" Nullable="false" MaxLength="50" />
</EntityType>

<EntityType Name="Contacts_x_Employers">
  <Key>
    <PropertyRef Name="ContactName" />
    <PropertyRef Name="EmployerName" />
  </Key>
  <Property Name="ContactName" Type="String" Nullable="false" MaxLength="50" Unicode="false" FixedLength="false" />
  <Property Name="EmployerName" Type="String" Nullable="false" MaxLength="50" Unicode="false" FixedLength="false" />
</EntityType>

The Visual Studio solution and the SQL scripts to re-create the whole application can be found in TestViewTrggers.zip at ftp://JulioSantos.com/files/TriggerBug/. I appreciate any assistance; I have already spent days working on this problem.
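    Since the CSDL above defines the view's entity key as the composite (ContactName, EmployerName), one way to narrow the failure down is to wrap SaveChanges and inspect both the inner exception and the key properties of the objects being added. The following is only a diagnostic sketch, not a fix; it assumes the TriggersTestEntities context and Contacts_x_Employers entity generated from the model shown in the question.

// Diagnostic sketch only: assumes the generated TriggersTestEntities context
// and the Contacts_x_Employers view entity from the question.
using (var context = new TriggersTestEntities())
{
    var ce1 = new Contacts_x_Employers { ContactName = "J", EmployerName = "T" };
    var ce2 = new Contacts_x_Employers { ContactName = "W", EmployerName = "C" };

    context.Contacts_x_Employers.AddObject(ce1);
    context.Contacts_x_Employers.AddObject(ce2);

    try
    {
        context.SaveChanges();
    }
    catch (InvalidOperationException ex)
    {
        // Surface the inner "key-value pairs that define an EntityKey" message.
        Console.WriteLine(ex.InnerException != null ? ex.InnerException.Message : ex.Message);

        // The view's key is the composite (ContactName, EmployerName) per the CSDL,
        // so report any added object with an empty key component.
        foreach (var row in new[] { ce1, ce2 })
        {
            if (string.IsNullOrEmpty(row.ContactName) || string.IsNullOrEmpty(row.EmployerName))
            {
                Console.WriteLine("Added object has an empty key property.");
            }
        }
    }
}

    If either object reports an empty key property here, that would match the "cannot be null or empty" inner exception independently of the trigger itself.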

    Read the article

  • Google App Engine with local Django 1.1 gets Intermittent Failures

    - by Jon Watte
    I'm using the Windows Launcher development environment for Google App Engine. I have downloaded the Django 1.1.2 source and un-tarred the "django" subdirectory to live within my application directory (a peer of app.yaml). At the top of each .py source file, I do this:

import settings
import os
os.environ["DJANGO_SETTINGS_MODULE"] = 'settings'

In my file settings.py (which lives at the root of the app directory as well), I do this:

DEBUG = True
TEMPLATE_DIRS = ('html')
INSTALLED_APPS = ('filters')
import os
os.environ["DJANGO_SETTINGS_MODULE"] = 'settings'
from google.appengine.dist import use_library
use_library('django', '1.1')
from django.template import loader

Yes, this looks a bit like overkill, doesn't it? I only use django.template. I don't explicitly use any other part of django. However, intermittently I get one of two errors: 1) Django complains that DJANGO_SETTINGS_MODULE is not defined. 2) Django complains that common.html (a template I'm extending in other templates) doesn't exist. 95% of the time, these errors are not encountered, and they randomly just start happening. Once in that state, the local server seems "wedged" and rebooting it generally fixes it. What's causing this to happen, and what can I do about it? How can I even debug it? Here is the traceback from the error:

Traceback (most recent call last):
  File "C:\code\kwbudget\edit_budget.py", line 34, in get
    self.response.out.write(t.render(template.Context(values)))
  File "C:\code\kwbudget\django\template\__init__.py", line 165, in render
    return self.nodelist.render(context)
  File "C:\code\kwbudget\django\template\__init__.py", line 784, in render
    bits.append(self.render_node(node, context))
  File "C:\code\kwbudget\django\template\__init__.py", line 797, in render_node
    return node.render(context)
  File "C:\code\kwbudget\django\template\loader_tags.py", line 71, in render
    compiled_parent = self.get_parent(context)
  File "C:\code\kwbudget\django\template\loader_tags.py", line 66, in get_parent
    raise TemplateSyntaxError, "Template %r cannot be extended, because it doesn't exist" % parent
TemplateSyntaxError: Template u'common.html' cannot be extended, because it doesn't exist

And edit_budget.py starts with exactly the lines that I included up top. All templates live in a directory named "html" in my root directory, and "html/common.html" exists. I know the template engine finds them, because I start out with "html/edit_budget.html", which extends common.html. It looks as if the settings module somehow isn't applied (because that's what adds html to the search path for templates).

    Read the article

  • Flipping OpenGL texture

    - by Mk12
    When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them? 1) glScalef(1.0f, -1.0f, 1.0f); 2) vertically flipping the image files manually (in Photoshop); 3) flipping them programmatically after loading them (I don't know how). This is the method I'm using to load png textures, in my Utilities.m file (Objective-C):

+ (GLuint)loadPngTexture:(NSString *)name {
    CFURLRef textureURL = CFBundleCopyResourceURL(
        CFBundleGetMainBundle(),
        (CFStringRef)name,
        CFSTR("png"),
        CFSTR("Textures"));
    NSAssert(textureURL, @"Texture name invalid");

    CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
    NSAssert(imageSource, @"Invalid Image Path.");
    NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
    CFRelease(textureURL);

    CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
    NSAssert(image, @"Image not created.");
    CFRelease(imageSource);

    NSUInteger width = CGImageGetWidth(image);
    NSUInteger height = CGImageGetHeight(image);

    void *data = malloc(width * height * 4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSAssert(colorSpace, @"Colorspace not created.");

    CGContextRef context = CGBitmapContextCreate(
        data, width, height, 8, width * 4, colorSpace,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
    NSAssert(context, @"Context not created.");
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGImageRelease(image);
    CGContextRelease(context);

    GLuint textureId;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
    glTexImage2D(
        GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE_SGIS);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE_SGIS);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

    free(data);
    return textureId;
}

Also, another thing I was wondering about: if I made a simple 2D game with pixels mapped to units, would it be all right to set it up so that the origin is in the top-left corner, or would I run into problems with other things (e.g. text rendering)? Thanks.

    Read the article

  • Memory allocation and release for UIImage in iPhone?

    - by rkbang
    Hello all, I am using the following code on iPhone to get a smaller cropped image:

- (UIImage*) getSmallImage:(UIImage*) img
{
    CGSize size = img.size;
    CGFloat ratio = 0;

    if (size.width < size.height) {
        ratio = 36 / size.width;
    } else {
        ratio = 36 / size.height;
    }

    CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height);

    UIGraphicsBeginImageContext(rect.size);
    [img drawInRect:rect];
    UIImage *tempImg = [UIGraphicsGetImageFromCurrentImageContext() retain];
    UIGraphicsEndImageContext();

    return [tempImg autorelease];
}

- (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    //create a context to do our clipping in
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();

    //create a rect with the size we want to crop the image to
    //the X and Y here are zero so we start at the beginning of our
    //newly created context
    CGFloat X = (imageToCrop.size.width - rect.size.width)/2;
    CGFloat Y = (imageToCrop.size.height - rect.size.height)/2;
    CGRect clippedRect = CGRectMake(X, Y, rect.size.width, rect.size.height);
    //CGContextClipToRect( currentContext, clippedRect);

    //create a rect equivalent to the full size of the image
    //offset the rect by the X and Y we want to start the crop
    //from in order to cut off anything before them
    CGRect drawRect = CGRectMake(0, 0, imageToCrop.size.width, imageToCrop.size.height);
    CGContextTranslateCTM(currentContext, 0.0, drawRect.size.height);
    CGContextScaleCTM(currentContext, 1.0, -1.0);

    //draw the image to our clipped context using our offset rect
    //CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);
    CGImageRef tmp = CGImageCreateWithImageInRect(imageToCrop.CGImage, clippedRect);

    //pull the image from our cropped context
    UIImage *cropped = [UIImage imageWithCGImage:tmp]; //UIGraphicsGetImageFromCurrentImageContext();
    CGImageRelease(tmp);

    //pop the context to get back to the default
    UIGraphicsEndImageContext();

    //Note: this is autoreleased
    return cropped;
}

I am using the following line of code in cellForRowAtIndexPath to update the image of the cell:

cell.img.image = [self imageByCropping:[self getSmallImage:[UIImage imageNamed:@"goal_image.png"]] toRect:CGRectMake(0, 0, 36, 36)];

Now when I add this table view and pop it from the navigation controller, I see a memory hike. I see no leaks, but memory keeps climbing. Please note that the image changes for each row, and I am creating the controller using lazy initialization, that is, I create/alloc it whenever I need it. I have seen many people on the internet facing the same issue, but very few good solutions. I have multiple views using the same approach, and I see memory rise by almost 4 MB within 20-25 view transitions. What is a good way to resolve this issue? Thanks.

    Read the article

  • Resolving a collision between point and moving line

    - by Conundrumer
    I am designing a 2D physics engine that uses Verlet integration for moving points (the velocities mentioned below can be derived), constraints to represent moving line segments, and continuous collision detection to resolve collisions between moving points and static lines, and collisions between moving/static points and moving lines. I already know how to calculate the Time of Impact for both types of collision events, and how to resolve collisions between a moving point and a static line. However, I can't figure out how to resolve collisions between a moving or static point and a moving line. Here are the initial conditions in a point and moving line collision event. We have a line segment joined by two points, A and B. At this instant, point P is touching/colliding with line AB. These points have unit mass and some might have an initial velocity, unless point P is static. The line is massless and has no explicit rotational component, since points A and B can move freely, extending or contracting the line as a result (which will be fixed later by the constraint solver). The collision is inelastic. What are the final velocities of the points after the collision?

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >