Search Results

Search found 2495 results on 100 pages for 'camera hacks'.

Page 92/100 | < Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >

  • Cell order changes in UITableView after reloadSections: method

    - by user304895
    I'm having a problem with a grouped UITableView, and after 3 days of searching I haven't found the solution... I have a grouped table composed of two sections. The first section contains a UISegmentedControl with three buttons: button0, button1 and button2. When clicking button0, I want to show two cells in the second section, each with an embedded UITextField. When clicking button1, I want to show one cell in the second section with an embedded UITextField. When clicking button2, I have to show the camera in a modal view (I think that part will be OK). I've also put placeholders in each UITextField. Each time a button is clicked, I call a pickOne: method in order to update my view. In this method, I construct an NSIndexSet with an NSRange of (1, 1), and then I call the reloadSections: method of the UITableViewController with the NSIndexSet as a parameter. When the view appears for the first time everything is OK, but when I click the buttons many times, the order of the cells changes (the cells containing the two text fields for button0), and the new placeholders are drawn over the old ones. Even more, sometimes when I click button0 it shows me only the second cell... Do you have any idea, or do you need some code? Thank you :)
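
    A hedged sketch of one likely culprit (the pickOne: wiring is from the question; the per-segment reuse identifiers and field layout are assumptions): placeholders drawn over old ones usually mean a fresh UITextField is added to a recycled cell that already carries one, so keying the reuse identifier on the selected segment, and adding the text field only when a cell is first created, keeps each reload clean:

        - (void)pickOne:(UISegmentedControl *)sender {
            self.selectedSegment = sender.selectedSegmentIndex;
            [self.tableView reloadSections:[NSIndexSet indexSetWithIndex:1]
                          withRowAnimation:UITableViewRowAnimationFade];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView
                 cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            // One identifier per segment, so a cell built for button1 is never
            // recycled, text field and all, while button0 is selected.
            NSString *cellId = [NSString stringWithFormat:@"Cell-%d",
                                (int)self.selectedSegment];
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellId];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:cellId] autorelease];
                UITextField *field = [[[UITextField alloc]
                    initWithFrame:CGRectInset(cell.contentView.bounds, 10, 8)] autorelease];
                field.placeholder = @"...";            // set per segment as needed
                [cell.contentView addSubview:field];   // added exactly once, here
            }
            return cell;
        }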

    Read the article

  • Windows-Mobile Directshow: Specifying bitrate/quality of a WMV video capture

    - by Landstander
    Hi - I'm stumped on this, and I'm really hoping someone can point me in the right direction. I'm currently capturing video in Windows Mobile and encoding it using the WMV9 DMO (CLSID_CWMV9EncMediaObject). That all works well enough, but the output video's bitrate is too high, resulting in a video file that's much too large for my needs. Ultimately, my goal is to mimic, from my C++ code, the video settings that Microsoft's Camera Capture dialog outputs in its "messaging" quality mode (64 kbps); currently, my code outputs a WMV file with a bitrate of 352 kbps. The only example I could find of specifying the capture bitrate with a WMV9 DMO was this. The idea in that code was basically to use a property bag to write a bitrate to a property of the DMO. Update: in Windows Mobile, the closest codec property I can find that seems to equate to the bitrate is "g_wszWMVCVBRQuality". Microsoft's documentation of this property is extremely confusing to me: it basically seems to say that a higher number equates to higher quality, but it gives absolutely no explanation of the specifics for each number. When I attempt to set this property to a value like "1" via a property bag on the WMV9 DMO, I run into a -2147467259 (unknown) error. To summarize: what is the basic strategy for specifying the bitrate/quality of a video captured via DirectShow (WMV9) on a Windows Mobile platform? I've heard of (or wondered about) the following methods: (1) use the property bag to change the encoder DMO's property that corresponds to bitrate/quality (currently failing); (2) create my own custom transcoder/encoder to specify it - this seems unnecessary, since the WMV encoder works well enough, just at too high a bitrate; (3) the VIDEOINFOHEADER has a bitrate field, but I suspect that specifying new settings there would do nothing to alter the actual encoding process, since I wouldn't think those attributes come into play until after the encoding. Any suggestions? PS: I would post specific source code, but at this point it may confuse more than it helps, since I'm floundering so much on how to do this. At this point, I'm just trying to validate the general strategy. THANKS!
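
    A hedged sketch of method (1). The constant names come from the desktop Windows Media headers (wmcodecconst.h); whether a given Windows Mobile build exposes them is an assumption to verify. The error code -2147467259 is E_FAIL (0x80004005), and one plausible reading is that quality-based VBR must be switched on before a VBRQuality value is accepted; the writes also generally need to happen before the DMO's output media type is set:

        IPropertyBag *pBag = NULL;
        HRESULT hr = pWmv9Dmo->QueryInterface(IID_IPropertyBag, (void**)&pBag);
        if (SUCCEEDED(hr)) {
            VARIANT v;
            VariantInit(&v);

            v.vt = VT_BOOL;                  // enable quality-based VBR first;
            v.boolVal = VARIANT_TRUE;        // without it, VBRQuality writes can fail
            hr = pBag->Write(g_wszWMVCVBREnabled, &v);

            v.vt = VT_I4;
            v.lVal = 75;                     // quality index, 0..100
            hr = pBag->Write(g_wszWMVCVBRQuality, &v);

            // ...or, for a hard bitrate cap instead of quality VBR:
            // v.vt = VT_I4; v.lVal = 64000;  // bits per second
            // hr = pBag->Write(g_wszWMVCAvgBitrate, &v);

            pBag->Release();
        }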

    Read the article

  • AndEngine VS Android's Canvas VS OpenGLES - For rendering a 2D indoor vector map

    - by Orchestrator
    This is a big issue I've been trying to figure out for a long time already. I'm working on an application that should include a 2D vector indoor map. The map will be drawn from an .svg file that specifies all the data for the lines, curved lines (paths) and rectangles that should be drawn. My main requirements for the map are: (1) support touch events, to detect where exactly a finger is touching; (2) great image quality, especially for the drawing of curved and diagonal lines (anti-aliasing); (3) optional but very nice to have - a built-in ability to zoom, pan and rotate. So far I have tried AndEngine and Android's Canvas. With AndEngine I had trouble implementing anti-aliasing for rendering smooth diagonal lines or drawing curved lines, and as far as I understand, this is not an easy thing to implement in AndEngine. Though I have to mention that AndEngine's ability to zoom in and pan with the camera, instead of modifying the objects on the screen, was really nice to have. I also have a little experience with the built-in Android Canvas, mainly viewing simple bitmaps, but I'm not sure whether it supports all of these things, and especially whether it would provide smooth results. Last but not least, there's the option of plain OpenGL ES 1 or 2, which, as far as I understand, should be able to support all the features I require with enough work. However, it seems like something that would be hard to implement, and I've never programmed in OpenGL or anything like it, though I'm very willing to learn. To sum it all up: I need a platform that provides the three things mentioned above, but also - very important - lets me implement this feature as fast as possible. Any kind of answer or suggestion would be very much welcomed, as I'm very eager to solve this problem! Thanks!
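
    A hedged sketch of the Canvas route (class and field names here are assumptions): Paint.ANTI_ALIAS_FLAG covers the smooth diagonals and curves, and a single Matrix gives pan/zoom/rotate plus the inverse mapping needed to hit-test touches in map space:

        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.Matrix;
        import android.graphics.Paint;
        import android.graphics.Path;
        import android.view.MotionEvent;
        import android.view.View;

        public class MapView extends View {
            private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            private final Path wall = new Path();        // built from the .svg data
            private final Matrix viewMatrix = new Matrix();

            public MapView(Context context) {
                super(context);
                paint.setStyle(Paint.Style.STROKE);
                paint.setStrokeWidth(2f);
                wall.moveTo(10f, 10f);
                wall.cubicTo(40f, 5f, 80f, 45f, 120f, 30f);  // a curved segment
            }

            @Override protected void onDraw(Canvas canvas) {
                canvas.concat(viewMatrix);               // pan/zoom/rotate in one place
                canvas.drawPath(wall, paint);            // anti-aliased stroke
            }

            @Override public boolean onTouchEvent(MotionEvent e) {
                Matrix inverse = new Matrix();           // map screen point back to
                viewMatrix.invert(inverse);              // map coordinates
                float[] pt = { e.getX(), e.getY() };
                inverse.mapPoints(pt);                   // pt[0], pt[1] are map coords
                return true;
            }
        }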

    Read the article

  • Setting corelocation results to NSNumber object parameters

    - by Dan Ray
    This is a weird one, y'all.

        - (void)locationManager:(CLLocationManager *)manager
            didUpdateToLocation:(CLLocation *)newLocation
                   fromLocation:(CLLocation *)oldLocation {
            CLLocationCoordinate2D coordinate = newLocation.coordinate;
            self.mark.longitude = [NSNumber numberWithDouble:coordinate.longitude];
            self.mark.latitude = [NSNumber numberWithDouble:coordinate.latitude];
            NSLog(@"Got %f %f, set %f %f", coordinate.latitude, coordinate.longitude,
                  self.mark.latitude, self.mark.longitude);
            [manager stopUpdatingLocation];
            manager.delegate = nil;
            if (self.waitingForLocation) {
                [self completeUpload];
            }
        }

    The latitude and longitude in that "mark" object are synthesized properties referring to NSNumber ivars. In the simulator, my NSLog output for that line in the middle reads:

        2010-05-28 15:08:46.938 EverWondr[8375:207] Got 37.331689 -122.030731, set 0.000000 -44213283338325225829852024986561881455984640.000000

    That's a WHOLE lot further east than 1 Infinite Loop! The numbers are different on the device, but similar: lat is still zero, and long is a very unlikely large negative number. Elsewhere in the controller I accept a button press and upload a file (an image I just took with the camera) with its geocoding info associated, and I need that self.waitingForLocation to inform the CLLocationManager delegate that I already hit that button, so once it's done its deal it should go ahead and fire off the upload. Thing is, up in the button-click-receiving method, I test whether CL is finished by testing self.mark.latitude, which seems to be getting set to zero...
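
    The symptom is consistent with a format-string problem rather than a Core Location one: %f tells NSLog to read a raw double, but self.mark.latitude and .longitude are NSNumber pointers, so the log prints reinterpreted pointer bits (hence the absurd longitude) even when the stored values are fine. Logging the objects, or unboxing first, shows what was actually set:

        NSLog(@"Got %f %f, set %@ %@",
              coordinate.latitude, coordinate.longitude,
              self.mark.latitude, self.mark.longitude);

        // or, when a double is genuinely needed:
        NSLog(@"set %f %f",
              [self.mark.latitude doubleValue],
              [self.mark.longitude doubleValue]);

    The same unboxing applies in the button-press check: compare [self.mark.latitude doubleValue], not the pointer itself.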

    Read the article

  • iPhone UIView frame animation inconsistent - why?

    - by Rick
    I have an app that uses an image loaded in from a UIImagePickerController instance. To reduce the jarring transition from the picker layout to the layout of the next function, I initially have the UIImageView for the image fill the whole screen, and then, once the picker is dismissed, the image 'squeezes' up to the top left of the screen. From the initWithFrame:

        targetPicView = [[UIImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];
        [targetPicView setContentMode:UIViewContentModeScaleToFill];

    And this, in a function called after dismissing the picker:

        [UIView beginAnimations:@"squeeze" context:context];
        [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
        [UIView setAnimationDuration:0.75];
        [targetPicView setFrame:CGRectMake(20.0, 20.0, 130.0, 150.0)];
        [UIView commitAnimations];

    The weird thing is that this works great when the image has been chosen from the library: the view shrinks down with the top left corner in place, just as I planned. But if the image comes from the camera, then the view shrinks with the top right corner in place instead and appears to come in from the left side of the screen. Can anyone shed any light on this?
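
    A hedged guess at the asymmetry: camera shots typically carry an imageOrientation of UIImageOrientationRight, while library picks are usually Up, and that orientation metadata can change how the scale-to-fill animation appears to anchor. Redrawing into an Up-oriented bitmap before assigning the image rules the difference out (a sketch, not a confirmed fix):

        UIGraphicsBeginImageContext(image.size);
        [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
        UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        targetPicView.image = normalized;  // orientation metadata now baked in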

    Read the article

  • iPhone - Failed to save the videos metadata to the filesystem.

    - by cameron
    My application uses the UIImagePickerController to allow the user to use the camera and capture a photo to edit, etc. I am getting the error message below:

        2010-02-03 10:41:24.018 LivingRoom[5333:5303] Failed to save the videos metadata to the filesystem. Maybe the information did not conform to a plist.
        Program received signal: “EXC_BAD_ACCESS”.

    A search in Google brings up a number of threads in various forums, with no ultimate response, root cause, or suggestions on how to fix or debug it. One example is the thread below, with code very similar to my app's: http://groups.google.com/group/iphonesdkdevelopment/browse_thread/thread/6b7b396c62bef398 The error disappears for a while (10 tests in a row, no errors) if I reboot the iPhone. I have not been able to determine what makes it reoccur after a reboot, but it does. I am not using the video source, and the fact that a reboot solves the problem for a while points to some sort of memory leak (perhaps?). The problem always shows up on both the iPhone (even after the reboot) and the simulator when choosing a photo from the album, but the app does not crash on either. The same app, with the exact same code, did not produce the error message when compiled against SDK 3.0 (last August/September). But 3.1.x has always produced it, which means that once a week or so the iPhone needs to be rebooted for the error to disappear. The users are not happy with that solution any longer!! Any suggestions or clues would be greatly appreciated.
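
    A hedged thing to try (kUTTypeImage lives in MobileCoreServices, which must be linked; the root cause here is unconfirmed): since the app never wants video, pinning the picker's mediaTypes to still images keeps 3.1's video-metadata path from running at all:

        #import <MobileCoreServices/MobileCoreServices.h>

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        picker.mediaTypes = [NSArray arrayWithObject:(NSString *)kUTTypeImage];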

    Read the article

  • Android: Crash when a single contact is clicked

    - by Sean Tan
    My application always crashes at this point; gurus here, please help me solve it. Thanks. The situation is as mentioned in the title above. Here is my AndroidManifest.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- Copyright (C) 2009 The Android Open Source Project
             Licensed under the Apache License, Version 2.0 (the "License"); you may
             not use this file except in compliance with the License. You may obtain
             a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
             Unless required by applicable law or agreed to in writing, software
             distributed under the License is distributed on an "AS IS" BASIS,
             WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
             See the License for the specific language governing permissions and
             limitations under the License. -->
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.example.android.contactmanager"
            android:versionCode="1"
            android:versionName="1.0">
            <uses-sdk android:minSdkVersion="10" />
            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
            <uses-permission android:name="android.permission.MANAGE_ACCOUNTS"/>
            <uses-permission android:name="android.permission.WRITE_OWNER_DATA"/>
            <uses-permission android:name="android.permission.CAMERA"/>
            <uses-permission android:name="android.permission.CALL_PHONE"/>
            <uses-permission android:name="android.permission.GET_ACCOUNTS" />
            <uses-permission android:name="android.permission.READ_CONTACTS" />
            <uses-permission android:name="android.permission.WRITE_CONTACTS" />
            <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
            <uses-permission android:name="android.permission.INTERNET"/>
            <uses-permission android:name="android.permission.CAMERA"/>
            <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
            <uses-permission android:name="android.permission.GET_ACCOUNTS"/>
            <application android:label="@string/app_name"
                android:icon="@drawable/icon"
                android:allowBackup="true">
                <activity android:name=".ContactManager" android:label="@string/app_name">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
                <activity android:name="ContactAdder" android:label="@string/addContactTitle">
                </activity>
                <activity android:name=".SingleListContact" android:label="Contact Person Details">
                </activity>
            </application>
        </manifest>

    SingleListContact.java:

        package com.example.android.contactmanager;

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.widget.TextView;

        public class SingleListContact extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                this.setContentView(R.layout.single_list_contact_view);
                TextView txtContact = (TextView) findViewById(R.id.contactList);
                Intent i = getIntent();
                // getting attached intent data
                String contact = i.getStringExtra("contact");
                // displaying selected contact name
                txtContact.setText(contact);
            }
        }

    My ContactManager.java is below:

        /* Copyright (C) 2009 The Android Open Source Project
         * Apache License 2.0 header, same as in the manifest above. */
        package com.example.android.contactmanager;

        import android.app.Activity;
        import android.content.Intent;
        import android.database.Cursor;
        import android.net.Uri;
        import android.os.Bundle;
        import android.provider.ContactsContract;
        import android.util.Log;
        import android.view.View;
        import android.widget.AdapterView;
        import android.widget.AdapterView.OnItemClickListener;
        import android.widget.Button;
        import android.widget.CheckBox;
        import android.widget.CompoundButton;
        import android.widget.CompoundButton.OnCheckedChangeListener;
        import android.widget.ListView;
        import android.widget.SimpleCursorAdapter;
        import android.widget.TextView;

        public final class ContactManager extends Activity implements OnItemClickListener {

            public static final String TAG = "ContactManager";

            private Button mAddAccountButton;
            private ListView mContactList;
            private boolean mShowInvisible;
            //public BooleanObservable ShowInvisible = new BooleanObservable(false);
            private CheckBox mShowInvisibleControl;

            /**
             * Called when the activity is first created. Responsible for initializing the UI.
             */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                Log.v(TAG, "Activity State: onCreate()");
                super.onCreate(savedInstanceState);
                setContentView(R.layout.contact_manager);

                // Obtain handles to UI objects
                mAddAccountButton = (Button) findViewById(R.id.addContactButton);
                mContactList = (ListView) findViewById(R.id.contactList);
                mShowInvisibleControl = (CheckBox) findViewById(R.id.showInvisible);

                // Initialise class properties
                mShowInvisible = false;
                mShowInvisibleControl.setChecked(mShowInvisible);

                // Register handlers for UI elements
                mAddAccountButton.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        Log.d(TAG, "mAddAccountButton clicked");
                        launchContactAdder();
                    }
                });
                mShowInvisibleControl.setOnCheckedChangeListener(new OnCheckedChangeListener() {
                    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
                        Log.d(TAG, "mShowInvisibleControl changed: " + isChecked);
                        mShowInvisible = isChecked;
                        populateContactList();
                    }
                });

                mContactList = (ListView) findViewById(R.id.contactList);
                mContactList.setOnItemClickListener(this);

                // Populate the contact list
                populateContactList();
            }

            /**
             * Populate the contact list based on the account currently selected in the account spinner.
             */
            private void populateContactList() {
                // Build adapter with contact entries
                Cursor cursor = getContacts();
                String[] fields = new String[] { ContactsContract.Data.DISPLAY_NAME };
                SimpleCursorAdapter adapter = new SimpleCursorAdapter(this, R.layout.contact_entry,
                        cursor, fields, new int[] { R.id.contactEntryText });
                mContactList.setAdapter(adapter);
            }

            /**
             * Obtains the contact list for the currently selected account.
             *
             * @return A cursor for accessing the contact list.
             */
            private Cursor getContacts() {
                // Run query
                Uri uri = ContactsContract.Contacts.CONTENT_URI;
                String[] projection = new String[] {
                    ContactsContract.Contacts._ID,
                    ContactsContract.Contacts.DISPLAY_NAME
                };
                String selection = ContactsContract.Contacts.IN_VISIBLE_GROUP + " = '"
                        + (mShowInvisible ? "0" : "1") + "'";
                //String selection = ContactsContract.Contacts.IN_VISIBLE_GROUP + " = '" + (mShowInvisible.get() ? "0" : "1") + "'";
                String[] selectionArgs = null;
                String sortOrder = ContactsContract.Contacts.DISPLAY_NAME + " COLLATE LOCALIZED ASC";
                return this.managedQuery(uri, projection, selection, selectionArgs, sortOrder);
            }

            /**
             * Launches the ContactAdder activity to add a new contact to the selected account.
             */
            protected void launchContactAdder() {
                Intent i = new Intent(this, ContactAdder.class);
                startActivity(i);
            }

            public void onItemClick(AdapterView<?> l, View v, int position, long id) {
                Log.i("TAG", "You clicked item " + id + " at position " + position);
                // Here you start the intent to show the details of the
                // selected item
                TextView tv = (TextView) v.findViewById(R.id.contactList);
                String allcontactlist = tv.getText().toString();
                // Launching new Activity on selecting single list item
                Intent i = new Intent(getApplicationContext(), SingleListContact.class);
                // sending data to new activity
                i.putExtra("Contact Person", allcontactlist);
                startActivity(i);
            }
        }

    contact_entry.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- Copyright (C) 2009 The Android Open Source Project
             (Apache License 2.0 header, same as in the manifest above) -->
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical">
            <ListView
                android:id="@+id/contactList"
                android:layout_width="wrap_content"
                android:layout_height="0dp"
                android:padding="10dp"
                android:textSize="200sp"
                android:layout_weight="10"/>
            <CheckBox
                android:id="@+id/showInvisible"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="@string/showInvisible"/>
            <Button
                android:id="@+id/addContactButton"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:text="@string/addContactButtonLabel"/>
        </LinearLayout>

    Logcat result:

        12-05 05:00:31.289: E/AndroidRuntime(642): FATAL EXCEPTION: main
        12-05 05:00:31.289: E/AndroidRuntime(642): java.lang.NullPointerException
        12-05 05:00:31.289: E/AndroidRuntime(642):   at com.example.android.contactmanager.ContactManager.onItemClick(ContactManager.java:148)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at android.widget.AdapterView.performItemClick(AdapterView.java:284)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at android.widget.ListView.performItemClick(ListView.java:3513)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at android.widget.AbsListView$PerformClick.run(AbsListView.java:1812)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at android.os.Handler.handleCallback(Handler.java:587)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at android.os.Handler.dispatchMessage(Handler.java:92)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at android.os.Looper.loop(Looper.java:123)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at android.app.ActivityThread.main(ActivityThread.java:3683)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at java.lang.reflect.Method.invokeNative(Native Method)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at java.lang.reflect.Method.invoke(Method.java:507)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
        12-05 05:00:31.289: E/AndroidRuntime(642):   at dalvik.system.NativeStart.main(Native Method)
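
    A hedged reading of that logcat (worth checking against line 148 of the real file): onItemClick asks the clicked row for R.id.contactList, but the row layout inflated per entry only contains R.id.contactEntryText, so findViewById returns null and getText() throws the NullPointerException. Note too that the extra is written under the key "Contact Person" while SingleListContact reads "contact". A corrected handler would look something like:

        public void onItemClick(AdapterView<?> l, View v, int position, long id) {
            // the row's own TextView, not the ListView's id
            TextView tv = (TextView) v.findViewById(R.id.contactEntryText);
            String contactName = tv.getText().toString();
            Intent i = new Intent(getApplicationContext(), SingleListContact.class);
            i.putExtra("contact", contactName);  // same key SingleListContact reads
            startActivity(i);
        }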

    Read the article

  • Java - Save video stream from Socket to File

    - by Alex
    I use my Android application to stream video from the phone camera to my PC server, and I need to save it into a file on the HDD. The file is created and the stream is successfully saved, but the resulting file cannot be played by any video player (GOM, KMP, Windows Media Player, VLC, etc.): no picture, no sound, only playback errors. I tested my Android application on the phone, and in that case the captured video is successfully stored on the phone's SD card and plays without errors after being transferred to the PC, so my code is correct. In the end, I realized that the problem is in the video container: the data is streamed from the phone in MP4 format and stored in *.mp4 files on the PC, and in this case the file may be incorrect for playback in video players. Can anyone suggest how to correctly save streaming video to a file? Here is my code that processes and stores the stream data (without error handling, to simplify):

        // getOutputMediaFile() returns a new File object
        DataInputStream in = new DataInputStream(server.getInputStream());
        FileOutputStream videoFile = new FileOutputStream(getOutputMediaFile());
        int len;
        byte buffer[] = new byte[8192];
        while ((len = in.read(buffer)) != -1) {
            videoFile.write(buffer, 0, len);
        }
        videoFile.close();
        server.close();

    Also, I would appreciate it if someone could talk about the possible "pitfalls" in dealing with the saving of media streams. Thank you, I hope for your help! Alex.
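
    A hedged sketch of the phone-side workaround (names are assumptions, and the encoder choice varies by device): MediaRecorder finalizes an MP4/3GPP file by seeking back to patch the header, which is impossible when the output is a socket, so the moov atom never lands and players reject the file. That would explain why the SD-card copy plays and the streamed copy does not. Recording to a seekable local file first, then streaming the finished file, produces a playable result:

        MediaRecorder recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        File tmp = new File(context.getExternalFilesDir(null), "capture.mp4");
        recorder.setOutputFile(tmp.getAbsolutePath());  // seekable, so the header gets patched
        recorder.prepare();
        recorder.start();
        // ...later: recorder.stop(); then send tmp over the socket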

    Read the article

  • Can't create an OgreBullet Trimesh

    - by Nathan Baggs
    I'm using Ogre and Bullet for a project, and I currently have a first-person camera set up with a capsule collision shape. I've created a model of a cave (which will serve as the main part of the level) and imported it into my game. I'm now trying to create an OgreBulletCollisions::TriangleMeshCollisionShape of the cave. The code I've got so far is below, but it isn't working: it compiles, but the capsule shape passes straight through the cave shape. Also, I have debug outlines on, and none are drawn around the cave mesh.

        Entity *cave = mSceneMgr->createEntity("Cave", "pCube1.mesh");
        SceneNode *caveNode = mSceneMgr->getRootSceneNode()->createChildSceneNode();
        caveNode->setPosition(0, 10, 250);
        caveNode->setScale(10, 10, 10);
        caveNode->rotate(Quaternion(0.5, 0.5, -0.5, 0.5));
        caveNode->attachObject(cave);

        OgreBulletCollisions::StaticMeshToShapeConverter *smtsc =
            new OgreBulletCollisions::StaticMeshToShapeConverter();
        smtsc->addEntity(cave);
        OgreBulletCollisions::TriangleMeshCollisionShape *tri = smtsc->createTrimesh();

        OgreBulletDynamics::RigidBody *caveBody =
            new OgreBulletDynamics::RigidBody("cave", mWorld);
        caveBody->setStaticShape(tri, 0.1, 0.8);
        mShapes.push_back(tri);
        mBodies.push_back(caveBody);

    Any suggestions are welcome.
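
    One hedged reading (the setStaticShape overload here is modeled on the OgreBullet demo code; verify the exact signature against your build): createTrimesh() works on the mesh's local-space vertices, so the node's 10x scale, rotation, and position never reach Bullet, and the trimesh sits unscaled at the origin while the rendered cave is elsewhere. Handing the node and transform to the body is one way through:

        caveBody->setStaticShape(caveNode, tri, 0.1, 0.8,
                                 Ogre::Vector3(0, 10, 250),
                                 Ogre::Quaternion(0.5, 0.5, -0.5, 0.5));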

    Read the article

  • Finding text orientation in image (angle for rotation)

    - by maximus
    There is an image captured by a camera, and I need to find the angle of the text in order to rotate it, to make the image better for OCR. I know that the Fourier transform can be used for this purpose. My question is: does it really give good results, or might it be better to use something different? Can you tell me if there is a good method for this? I am afraid that not every image containing text will give a good result after using the Fourier transform method. Actually, if I do it as described in the article I linked (see the part with an example of a text image) - calculating the logarithm of the magnitude of the Fourier transform of the text image and then thresholding it - I get a set of points, and I can compute the line approximately passing through them; having the line, I can calculate the angle and then apply an affine transform. But what if I do not get a good result every time with this method, and apply a false transform? Any ideas on how to judge whether the result is correct or not, or whether another method would be better? The binary image can contain noise, and even if there is not much of it, the resulting angle may not be accurate.
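
    A hedged alternative sketch (OpenCV; the threshold setup is an assumption to tune): deskew straight from the ink pixels with a minimum-area rectangle, which also gives a crude confidence check - if the box is nearly square or sparsely filled, the angle is probably untrustworthy:

        import cv2
        import numpy as np

        def text_angle(gray):
            # binarize so text becomes white on black; Otsu picks the threshold
            _, bw = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
            coords = np.column_stack(np.where(bw > 0))   # (y, x) of ink pixels
            rect = cv2.minAreaRect(coords.astype(np.float32))
            angle = rect[-1]
            # minAreaRect reports angles in (-90, 0]; fold into a deskew angle
            if angle < -45:
                angle += 90
            return angle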

    Read the article

  • Internet Explorer percent based layout issue

    - by Tom
    Heya, my goal is to make a layout that is 200% width and height, with four containers of equal height and width (100% each), using no JavaScript as the bare minimum (and preferably no hacks). Right now I am using HTML5 and CSS display:table. It works fine in Safari 4, Firefox 3.5, and Chrome 5 (I haven't tested older versions yet). Nonetheless, in IE7 and IE8 this layout fails completely. (I do use the JavaScript HTML5-enabling script /cc../, so it should not be the use of new HTML5 tags.) Here is what I have:

        <!DOCTYPE html>
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
            <meta charset="UTF-8" />
            <title>IE issue with layout</title>
            <style type="text/css" media="all">
                /* styles */
                @import url("reset.css");
                /* General CSS */
                .table { display:table; }
                .row { display:table-row; }
                .cell { display:table-cell; }
                /* Specific CSS */
                html, body {
                    //overflow:hidden; I later intend to limit the viewport
                }
                section#body { position:absolute; width:200%; height:200%; overflow:hidden; }
                section#body .row { width:200%; height:50%; overflow:hidden; }
                section#body .row .cell { width:50%; overflow:hidden; }
                section#body .row .cell section { display:block; width:100%; height:100%; overflow:hidden; }
                section#body #stage0 section header { text-align:center; height:20%; display:block; }
                section#body #stage0 section footer { display:block; height:80%; }
            </style>
        </head>
        <body>
            <section id="body" class="table">
                <section class="row">
                    <section id="stage0" class="cell">
                        <section>
                            <header>
                                <form>
                                    <input type="text" name="q" />
                                    <input type="submit" value="Search" />
                                </form>
                            </header>
                            <footer>
                                <table id="scrollers">
                                </table>
                            </footer>
                        </section>
                    </section>
                    <section id="stage1" class="cell">
                        <section> content </section>
                    </section>
                </section>
                <section class="row">
                    <section id="stage2" class="cell">
                        <section> content </section>
                    </section>
                    <section id="stage3" class="cell">
                        <section> content </section>
                    </section>
                </section>
            </section>
        </body>
        </html>

    You can see it live here: http://www.tombarrasso.com/ie-issue/
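
    A hedged fallback sketch (selectors assume the markup above): IE7 has no display:table support at all, which would explain the total failure there, while absolutely positioned 50%-by-50% quadrants express the same four-pane layout in properties IE7 does understand:

        section#body { position: absolute; width: 200%; height: 200%; overflow: hidden; }
        #stage0, #stage1, #stage2, #stage3 {
            position: absolute; width: 50%; height: 50%; overflow: hidden;
        }
        #stage0 { left: 0;   top: 0; }
        #stage1 { left: 50%; top: 0; }
        #stage2 { left: 0;   top: 50%; }
        #stage3 { left: 50%; top: 50%; }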

    Read the article

  • video calling (center)

    - by rrejc
    We are starting to develop a new application, and I'm searching for information/tips/guides on application architecture. The application should:

        1. read the data from an external (USB) device
        2. send the data to the remote server (through the internet)
        3. receive the data from the remote server
        4. perform a video call with the calling (support) center
        5. receive a video call from the calling (support) center
        6. support touch screens

    In addition: (7) some of the data should also be visible through a web page. So I was thinking about the following. On the server side: use a database (probably MS SQL); use an ORM (NHibernate) to map the data from the DB to domain objects; create a layer with the business logic in C#; create web (WCF) services for the client application; and create an ASP.NET MVC application (for item 7) to enable viewing the data through the browser. On the client side I would use a WPF 4 application which will communicate with the external device and with the WCF services on the server. So far so good. Now the problem begins: I have no idea how to create the video call (outgoing or incoming) part of the application. I believe there is no problem communicating with the microphone, speakers, and camera from WPF/C#. But how do I communicate with the call center? What protocol and encoding should be used? I think I will need to create some kind of server which will: keep a list of operators in the calling center and track which operator is occupied and which is free; keep a list of connected end users; receive incoming calls from end users and delegate each call to a free operator; and delegate calls from the calling center to the end user. Any info, link, anything on where to start would be much appreciated. Many thanks!

    Read the article

  • iPhone OpenGL and NSTimer issues

    - by Kyle
    I have an NSTimer that runs at 60 Hz. With an OpenGL scene loaded and rendering, my game can get 60 fps, solid, all day long. Then if I go and recompile the app, or reload it, it will get 40 fps. Same resources loaded. I've been running into this problem for years, and I just want to know why. It's crazy, and I want to know if I should just abandon this stupid timer. Conditions are not different on my 3GS between loads; it will just get 40 fps sometimes. Obviously the clock rate is not different between loads, so the performance figures should be constant given a constant scene. Here is a log of my framerates. A good load :-) FrameRate: 61 FrameRate: 61 FrameRate: 61 FrameRate: 60 FrameRate: 60 FrameRate: 61 FrameRate: 60 FrameRate: 60 FrameRate: 61 FrameRate: 60 FrameRate: 61. Now I'll go ahead and do nothing, recompile, and run: FrameRate: 43 FrameRate: 50 FrameRate: 45 FrameRate: 48 FrameRate: 40 FrameRate: 45 FrameRate: 42 FrameRate: 41 FrameRate: 42 FrameRate: 44 FrameRate: 41 FrameRate: 46. A massive difference visually. What the flying heck could cause this? SAME area of the scene, SAME camera setup. No variables are different.
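
    A hedged sketch of the usual suspect: a 60 Hz NSTimer drifts in and out of phase with the display's refresh, and whether a given launch lands in phase is luck, which matches the 60-vs-40 lottery described. CADisplayLink (iPhone OS 3.1+) fires on the refresh itself, so it cannot dealias this way; drawFrame here stands in for the existing render method:

        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(drawFrame)];
        link.frameInterval = 1;  // fire every refresh: 60 Hz on the 3GS
        [link addToRunLoop:[NSRunLoop currentRunLoop]
                   forMode:NSDefaultRunLoopMode];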

    Read the article

  • Expression Blend doesn't recognize command objects declared in the code-behind file

    - by Brian Ensink
    I have a WPF UserControl. The code-behind file declares some RoutedUICommand objects which are referenced in the XAML. The application builds and runs just fine. However, Expression Blend 3 cannot load the XAML in the designer and gives errors like this one:

        The member "ResetCameraComand" is not recognized or accessible.

    The class and the member are both public. Building and rebuilding the project in Blend and restarting Blend haven't helped. Any ideas what the problem is? Here are fragments of my XAML...

        <UserControl x:Class="CAP.Visual.CameraAndLightingControl"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:local="clr-namespace:CAP.Visual;assembly=VisualApp"
            Height="100" Width="700">
            <UserControl.CommandBindings>
                <CommandBinding Command="local:CameraAndLightingControl.ResetCameraCommand"
                                Executed="ResetCamera_Executed"
                                CanExecute="ResetCamera_CanExecute"/>
            </UserControl.CommandBindings>
            ....

    ...and the code-behind C#:

        namespace CAP.Visual
        {
            public partial class CameraAndLightingControl : UserControl
            {
                public readonly static RoutedUICommand ResetCameraCommand;

                static CameraAndLightingControl()
                {
                    ResetCameraCommand = new RoutedUICommand("Reset Camera",
                        "ResetCamera", typeof(CameraAndLightingControl));
                }
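
    A hedged thing to try (a known Blend quirk, though not confirmed as this exact case): Blend compiles design-time assemblies under different names, so an xmlns that names the control's own assembly explicitly can stop resolving in the designer even though the real build is fine. For a namespace in the same project, dropping the assembly= part often clears the error:

        xmlns:local="clr-namespace:CAP.Visual"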

    Read the article

  • Creating a content management system for dedicated use

    - by whitstone86
    I've been trying to create a specialised CMS, as none of the current open-source ones fit my needs for this project. I did my research on Google and have tried multiple times, but I haven't got very far with this project. I'm trying to create a CMS for a TV/episode guide similar to this (for URLs that don't have the ://, just copy-and-paste and add it after http): http library.digiguide.com/lib/programmenextshowing/Police%2C+Camera%2C+Action!-12578 (one such example) - where records expire and are deleted from the database after expiration. This is the design I'm trying to emulate: http library.digiguide.com/lib/programme/24-84241/Drama/ - the programme; http://library.digiguide.com/lib/episode/Under+Surveillance-714873 - a typical episode (I could use .htaccess to remove php from the name); http library.digiguide.com/lib/programmenextshowing/24-84241 - paginated episode display (possibly using a script that I found in a search here). I don't have access to cron jobs, as it's on Windows/Apache, so that's out of the question for this one. I'm not sure how to go about this successfully; does anyone have any advice? (Note: although the linked site runs on ASP.NET, it's the design and feel of it I'm trying to emulate, except in PHP. I've managed to emulate that site's design, but with my own tweaks to it.)
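
    A hedged sketch of the usual cron-less workaround (a "poor man's cron"; table and column names here are assumptions): run the expiry purge at the top of every page request, or point Windows Task Scheduler at the same script:

        <?php
        // purge showings that expired more than a day ago; schema is hypothetical
        $db = new mysqli('localhost', 'user', 'pass', 'tvguide');
        $db->query('DELETE FROM showings WHERE airs_at < NOW() - INTERVAL 1 DAY');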

    Read the article

  • Writing a code example

    - by Stefano Borini
    I would like to have your feedback regarding code examples. One of the most frustrating experiences I sometimes have when learning a new technology is finding useless examples. I see an example as the most precious thing that comes with a new library, language, or technology. It must be a starting point, a wise and unadulterated explanation of how to achieve a given result. A perfect example must have the following characteristics:

        Self-contained: it should be small enough to be compiled or executed as a single program, without dependencies or complex makefiles. An example is also a strong functional test of whether you correctly installed the new technology. The more issues that could arise, the more likely it is that something goes wrong, and the more difficult the situation is to debug and solve.
        Pertinent: it should demonstrate one, and only one, specific feature of your software/library, involving minimal additional behavior from external libraries.
        Helpful: the code should bring you forward, step by step, using comments or self-documenting code.
        Extensible: the example code should be a small "framework" or blueprint for additional tinkering. A learner can start by adding features to this blueprint.
        Recyclable: it should be possible to extract parts of the example to use in your own code.
        Easy: an example is not the place to show off your code-fu skillz. Keep it easy.

    A helpful acronym: SPHERE. Prototypical violations of those rules are the following:

        Violation of Self-containedness: an example spanning multiple files without any real need for it. If your example is a Python program, keep everything in a single module file; don't sub-modularize it. In Java, try to keep everything in a single class, unless you really must partition some entity into a meaningful object you need to pass around (and Java mandates one public class per file, if I remember correctly).
        Violation of Pertinency: when showing how many different shapes you can draw, adding radio buttons and complex controls with all the possible choices for point shapes is a bad idea. You de-focus your example code, introducing code for event handling, control initialization, etc. that is not part of the feature you want to demonstrate; it is unnecessary noise in the understanding of the crucial mechanisms providing the feature.
        Violation of Helpfulness: code containing dubious naming, wrong comments, hacks, and functions longer than one page of code.
        Violation of Extensibility: badly factored code that has everything in a single function, with potentially swappable entities embedded within the code. Example: if an example reads data from a file and displays it, create a method getData() returning a useful entity, instead of opening the file raw and plotting the stuff. This way, if the user of the library needs to read data from an HTTP server instead, he just has to modify getData() and use the example almost as-is (see the sketch after this list). Another violation of Extensibility occurs if the example code is not under a fully liberal (e.g. MIT or BSD) license.
        Violation of Recyclability: when the code layout is so intermingled that it is difficult to copy and paste parts of it and recycle them into another program. Again, licensing is also a factor.
        Violation of Easiness: yes, you are a functional-programming nerd and want to show how cool you are by doing everything in a single line of map, filter, and so on, but that might not be helpful to someone else, who is already under pressure to understand your library and now has to understand your code as well.

    And in general, the final rule: if it takes more than 10 minutes to compile the code, run it, read the source, and understand it fully, the example is not a good one. Please let me know your opinion, positive or negative, or your experience in this regard.
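
    A toy illustration of the Extensibility point (a minimal sketch with assumed names, not from the original post): isolating the data source behind get_data() lets a reader swap the file for an HTTP fetch without touching the display half.

        def get_data(path="data.txt"):
            # the one swappable entity: replace this body to read from
            # an HTTP server instead of a file, and nothing else changes
            with open(path) as f:
                return [float(line) for line in f]

        def plot(values):
            # deliberately dumb display so it stays self-contained
            for v in values:
                print("*" * int(v))

        if __name__ == "__main__":
            plot(get_data())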

    Read the article

  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
    Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic, and I was wondering if it would be possible to implement bitwise operations using only:

        Addition (c = a + b)
        Subtraction (c = a - b)
        Division (c = a / b)
        Multiplication (c = a * b)
        Modulus (c = a % b)
        Minimum (c = min(a, b))
        Maximum (c = max(a, b))
        Comparisons (c = (a < b), c = (a == b), c = (a <= b), etc.)
        Jumps (goto, for, etc.)

    The bitwise operations I want to be able to support are:

        Or (c = a | b)
        And (c = a & b)
        Xor (c = a ^ b)
        Left Shift (c = a << b)
        Right Shift (c = a >> b) (all integers are signed, so this is a problem)
        Signed Shift (c = a >>> b)
        One's Complement (a = ~b) (already found a solution, see below)

    Normally the problem is the other way around: how to achieve arithmetic optimizations using bitwise hacks. Not in this case, however. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data and instruction memory are abundant. A side note: jumps and branches are not expensive, and all data is readily cached. Jumps cost half the cycles that arithmetic (including load/store) instructions do. In other words, all of the supported functions above cost twice the cycles of a single jump. Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code:

        // Bitwise one's complement
        b = ~a;
        // Arithmetic one's complement
        b = -1 - a;

    I also remember the old shift hack when dividing by a power of two, so the bitwise shifts can be expressed as:

        // Bitwise left shift
        b = a << 4;
        // Arithmetic left shift
        b = a * 16; // 2^4 = 16

        // Signed right shift
        b = a >>> 4;
        // Arithmetic right shift
        b = a / 16;

    For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture had supplied bit operations. I would also like to know if there is a fast/easy way of computing a power of two (for the shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications:

        b = 1;
        switch (a) {
            case 15: b = b * 2;
            case 14: b = b * 2;
            // ... exploiting fallthrough (instruction memory is magnitudes larger)
            case 2:  b = b * 2;
            case 1:  b = b * 2;
        }

    Or a set-and-jump approach:

        switch (a) {
            case 15: b = 32768; break;
            case 14: b = 16384; break;
            // ... exploiting the fact that a jump is faster than one additional mul,
            //     at the cost of doubling the instruction memory footprint
            case 2:  b = 4; break;
            case 1:  b = 2; break;
        }
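
    A hedged sketch of where the arithmetic-only route can lead (plain C, written for non-negative operands; the signed-16-bit corner cases the question worries about still need separate handling): AND can be peeled out one bit at a time with % 2 and / 2, and OR/XOR follow the same skeleton:

        /* Bitwise AND via modulus and division only; assumes a, b >= 0.
           For OR, take max(a % 2, b % 2); for XOR, (a % 2 + b % 2) % 2. */
        int bit_and(int a, int b)
        {
            int result = 0;
            int power = 1;
            int i;
            for (i = 0; i < 15; i++) {   /* 15 value bits in a signed 16-bit word */
                if (a % 2 == 1 && b % 2 == 1)
                    result = result + power;
                a = a / 2;
                b = b / 2;
                power = power * 2;
            }
            return result;
        }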

    Read the article

  • iPhone - ModalViewController not raising to top of the screen

    - by Oliver
    Hello, I have a UIImagePickerController that is shown with:

        [self presentModalViewController:self.picker animated:NO];

    Later in the code, I allow the user to display a preferences panel:

        PreferencesController *nextWindow = [[[PreferencesController alloc] initWithNibName:@"Preferences" bundle:nil] autorelease];
        UINavigationController *navController = [[[UINavigationController alloc] initWithRootViewController:nextWindow] autorelease];
        [self presentModalViewController:navController animated:YES];

    At this point, the new controller rises on the screen but doesn't go all the way to the top. Some space is left "transparent" at the top (I can see the camera view behind), and the bottom of the view is hidden off the screen. The space I am talking about is about the height of a status bar. The status bar is not present on the screen. The navigation bar is hidden:

        self.navigationController.navigationBarHidden = YES;

    There is a toolbar at the top of the view, and nothing special in the view. The height of the view is defined as 480. All simulated elements are set to off in IB. The autoresize properties are all set on. I had a previous xib (I rebuilt this one from scratch) that worked very well, and I don't see what I missed on this one (I have only changed the xib, which replaces the previous one). I've cleaned the cache to be sure there was nothing left. No change... I've deleted everything in the new view to prevent conflicts. No change... What did I miss? How can I remove this empty space?
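
    A hedged guess, not a confirmed fix: after the camera picker, UIKit sometimes still lays the next modal out as if a status bar were present, leaving a roughly 20-point gap. Re-asserting the status bar state and the frame before presenting is a cheap thing to try (setStatusBarHidden:animated: is the pre-3.2 form of that API):

        [[UIApplication sharedApplication] setStatusBarHidden:YES animated:NO];
        navController.view.frame = [[UIScreen mainScreen] bounds];  // re-assert full screen
        [self presentModalViewController:navController animated:YES];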

    Read the article

  • Reusing Windows Picture and Fax Viewer process to load a new image from FileSystemWatcher

    - by Cory Larson
    So for an idea for my birthday party, I'm setting up a photo booth. I've got software to remotely control the camera and all that, but I need to write a little application to monitor the folder where the pictures get saved and display them. Here's what I've got so far. The issue is that I don't want to launch a new Windows Photo Viewer process every time the FileSystemWatcher sees a new file; I just want to load the latest image into the current instance of the Windows Photo Viewer (or start a new one if one isn't running).

        class Program
        {
            static void Main(string[] args)
            {
                new Program().StartWatching();
            }

            public void StartWatching()
            {
                FileSystemWatcher incoming = new FileSystemWatcher();
                incoming.Path = @"G:\TempPhotos\";
                incoming.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName;
                incoming.Filter = "*.jpg";
                incoming.Created += new FileSystemEventHandler(ShowImage);
                incoming.EnableRaisingEvents = true;

                Console.WriteLine("Press 'q' to quit.");
                while (Console.Read() != 'q') ;
            }

            private void ShowImage(object source, FileSystemEventArgs e)
            {
                string s1 = Environment.ExpandEnvironmentVariables("%windir%\\system32\\rundll32.exe ");
                string s2 = Environment.ExpandEnvironmentVariables("%windir%\\system32\\shimgvw.dll,ImageView_Fullscreen " + e.FullPath);
                Process.Start(s1, s2);
                // format indexes fixed: {0} change type, {1} path, {2} time
                Console.WriteLine("{0}: Image \"{1}\" at {2:t}.", e.ChangeType, e.FullPath, DateTime.Now);
            }
        }

    If you don't have a tried-and-true solution, a simple push in the right direction would be just as valuable. And FYI, this will be running on a 64-bit Windows 7 machine. Thanks!
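
    A hedged alternative sketch: rundll32/shimgvw exposes no handle for "load a new file into the window you already opened", so one way out is to skip the viewer entirely and let the watcher update a WinForms window in place (this assumes the program is reshaped around a Form; names are illustrative):

        using System.Drawing;
        using System.Windows.Forms;

        // created once at startup
        PictureBox box = new PictureBox { Dock = DockStyle.Fill, SizeMode = PictureBoxSizeMode.Zoom };
        Form frame = new Form { WindowState = FormWindowState.Maximized };
        frame.Controls.Add(box);

        // inside ShowImage: hop to the UI thread, then swap the picture.
        // Image.FromFile keeps the file locked; copy it first if files get overwritten.
        box.Invoke((MethodInvoker)(() => box.Image = Image.FromFile(e.FullPath)));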

    Read the article

  • Trying to attach a file from SD Card to email

    - by Chrispix
    I am trying to launch an Intent to send an email. All of that works, but when I try to actually send the email, a couple of 'weird' things happen. Here is the code:

        Intent sendIntent = new Intent(Intent.ACTION_SEND);
        sendIntent.setType("image/jpeg");
        sendIntent.putExtra(Intent.EXTRA_SUBJECT, "Photo");
        sendIntent.putExtra(Intent.EXTRA_STREAM, Uri.parse("file://sdcard/dcim/Camera/filename.jpg"));
        sendIntent.putExtra(Intent.EXTRA_TEXT, "Enjoy the photo");
        startActivity(Intent.createChooser(sendIntent, "Email:"));

    If I launch it using the Gmail menu context, it shows the attachment, lets me type who the email is to, and lets me edit the body and subject. No big deal. I hit send, and it sends. The only thing is the attachment does NOT get sent. So I figured, why not try it with the Email menu context (for the backup email account on my phone)? It shows the attachment, but no text at all in the body or subject. When I send it, the attachment sends correctly. That would lead me to believe something is quite wrong. Do I need a new permission in the Manifest to launch an intent to send email with an attachment? What am I doing wrong?
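
    A hedged sketch of the most likely fix: in "file://sdcard/...", "sdcard" parses as the URI authority rather than part of the path (a proper file URI needs three slashes), and Gmail quietly drops such an attachment where the stock Email app tolerates it. Building the Uri from a File avoids the slash-counting entirely:

        File photo = new File(Environment.getExternalStorageDirectory(),
                              "dcim/Camera/filename.jpg");
        sendIntent.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(photo));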

    Read the article

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images:

        1. Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format.
        2. Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later.

    In both cases, the reader must understand the format well enough to know how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are:

        Storing images in a way that allows for rapid random access (tiling)
        Storing downsampled images for rapid zooming (pyramid)
        Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image)
        Storing important metadata, including associated images like a slide's label and thumbnail
        Support for lossy storage

    What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.

    Read the article

  • Evidence-Based-Scheduling - are estimations only as accurate as the work-plan they're based on?

    - by Assaf Lavie
    I've been using FogBugz's Evidence-Based Scheduling (for the uninitiated, Joel explains) for a while now, and there's an inherent problem I can't seem to work around. The system is good at telling me the probability that a given project will be delivered by some date, given the detailed list of tasks that comprise the project. However, it does not take into account the fact that during development additional tasks always pop up. Now, there's the garbage-can approach of creating a generic task/scheduled item for "last minute hacks" or "integration tasks", or what have you, but that clearly goes against the idea of aggregating the estimates of many small cases. It's often the case that during the development stage of a project you realize there's a whole area your planning didn't cover, because, well, that's the nature of developing stuff that hasn't been developed before. So now your ~3 month project may very well turn into a 6 month project, but not because your estimations were off (you could be the best estimator in the world for those tasks that comprised your initial work plan); rather, because you ended up adding a whole bunch of new tasks that weren't there to begin with. EBS doesn't help you with that. It could, theoretically (I guess): it could, perhaps, measure the amount of work you add to a project over time and take that into consideration when estimating the time remaining on a given project. Just a thought. In other words, EBS works on a task basis, but not on a project/release basis - and the latter is what's important. It's what your boss typically cares about: the delivery date, not the time it takes to finish each task along the way, and not the time it would have taken if your planning were perfect. So the question is (yes, there's a question here, don't close it): What's your methodology when it comes to using EBS in FogBugz, and how do you solve the problem above, which seems to be a main cause of schedule delays and mispredictions?

    Edit: Some more thoughts after reading a few answers. If it comes down to having to choose which delivery date you're comfortable presenting to your higher-ups by squinting at the delivery-probability graph and picking 80%, or 95%, or 60% (based on what, exactly?), then we've resorted to plain old buffering/factoring of our estimates. In which case, couldn't we have skipped the meticulous case-by-case hour-sized estimation effort? By forcing ourselves to break down tasks that take more than a day into smaller chunks of work, haven't we just deluded ourselves into thinking our planning is as tight and thorough as it could be? People may be consistently bad estimators who do not even learn from their past mistakes. In that respect, having an EBS system is certainly better than not having one. But what can we do about the fact that we're not that good at planning, either? I'm not sure it's a problem that can be solved by a similar system. Our estimates are wrong because of tendencies to be overly optimistic or pessimistic about certain tasks, and because of neglecting to account for systematic delays (e.g. sick days, a major bug crisis) - and usually not because we lack knowledge about the work that needs to be done. Our planning, on the other hand, is often incomplete because we simply don't have enough knowledge at this early stage, and I don't see how an EBS-like system could fill that gap. So we're back to methodology: we need to find a way to accommodate bad or incomplete work plans that's better than voodoo multiplication.
    Read the article

  • How to draw an unfilled square on top of a video stream using a mouse, and track the object enclosed

    - by Haxed
    Hi, I am making an object-tracking application. I have used EmguCV 2.1.0.0 to load a video file into a picture box, and I have also taken the video stream from a web camera. Now, I want to draw an unfilled square on the video stream using the mouse, and then track the object enclosed by the square as the video continues to stream. This is what people have suggested so far:

        1. .NET video overlay drawing (DirectX) - but this is for C++ users; the suggester said there are .NET wrappers, but I had a hard time finding any.
        2. The DxLogo sample - a sample application showing how to superimpose a logo on a data stream. It uses a capture device for the video source and outputs the result to a file. Sadly, this does not use a mouse.
        3. GDI+ and mouse handling - an area where I do not have a clue.

    For tracking the object in the square, I would appreciate it if someone could give me some research paper links to read. Any help on using the mouse to draw on video is greatly appreciated. Thank you for taking the time to read this. Many thanks!
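
    A hedged sketch of option 3, the GDI+ route (field names are assumptions, placed in the form that owns the PictureBox): remember the mouse-down corner, track the moving corner, and draw the rubber-band rectangle in the Paint handler over whatever frame the PictureBox is showing:

        Point start, current;
        bool dragging;

        pictureBox.MouseDown += (s, e) => { start = current = e.Location; dragging = true; };
        pictureBox.MouseMove += (s, e) => { if (dragging) { current = e.Location; pictureBox.Invalidate(); } };
        pictureBox.MouseUp   += (s, e) => { dragging = false; };
        pictureBox.Paint     += (s, e) =>
        {
            if (start == current) return;
            var r = new Rectangle(Math.Min(start.X, current.X), Math.Min(start.Y, current.Y),
                                  Math.Abs(current.X - start.X), Math.Abs(current.Y - start.Y));
            e.Graphics.DrawRectangle(Pens.Red, r);   // unfilled outline over the frame
        };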

    Read the article

  • Optimizing quality for available bandwidth in Flash/RTMFP

    - by Artem M.
    I'm developing a simple one-on-one P2P video chat in ActionScript, and I'd like to ensure the best video quality for the peers given their bandwidth. This means: (1) setting the best quality, given the available bandwidth, when the chat starts; and (2) responding to network congestion during the chat by decreasing the quality. The task is similar to dynamic stream switching, but P2P has specifics that make dynamic-streaming approaches not work. For example, the maxBytesPerSecond metric monitored in dynamic stream switching is pretty useless in P2P, where the receiving NetStream's buffer size is set to 0 to minimize latency. So far, the most reliable QoS metric for P2P looks to be SRTT. In my simulated tests on a local network, a bandwidth congestion makes it shoot up to 500 ms and more when a bandwidth limit is introduced. However, it gives no hint as to how best to adjust the value for bandwidth in Camera.setQuality(0, bandwidth) in response to the congestion. I've done lots of experiments, and I still don't see a clear and simple solution to the problem. I'm also wondering how this issue is addressed (if at all) in other RTMFP chat solutions.
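
    A hedged sketch of one control loop (thresholds and step sizes are assumptions to tune; note that Camera.setQuality takes the bandwidth cap as its first argument): poll NetStream.info.srtt and halve the cap whenever it spikes, creeping back up while the network stays quiet:

        var timer:Timer = new Timer(2000);
        timer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
            var srtt:Number = outgoingStream.info.srtt;
            if (srtt > 400 && bandwidth > 16384) {
                bandwidth /= 2;                 // back off hard on congestion
            } else if (srtt < 150) {
                bandwidth += bandwidth / 10;    // probe upward gently
            }
            camera.setQuality(bandwidth, 0);    // bandwidth cap first, quality second
        });
        timer.start();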

    Read the article

  • Zoom image to pixel level

    - by zaf
    For an art project, one of the things I'll be doing is zooming in on an image to a particular pixel. I've been rubbing my chin and would love some advice on how to proceed. Here are the input parameters:

        Screen: sw - screen width; sh - screen height
        Image: iw - image width; ih - image height
        Pixel: px - x position of the pixel in the image; py - y position of the pixel in the image
        Zoom: zf - zoom factor (0.0 to 1.0)
        Background colour: bc - background colour to use when the screen and image aspect ratios differ

    Outputs:

        The zoomed image (no anti-aliasing)
        The screen position/dimensions of the pixel we are zooming to

    When zf is 0, the image must fit the screen with the correct aspect ratio. When zf is 1, the selected pixel must fill the screen with the correct aspect ratio. One idea I had was to use something like povray and move the camera towards a big image texture, or use some library (e.g. pygame) to do the zooming. Can anyone think of something more clever, with simple pseudo code? To keep it simple, you can assume the image and screen have the same aspect ratio; I can live with that. I'll update with more info as required.
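
    A hedged pseudo-code sketch (plain Python, not tied to povray or pygame; the blend scheme is one choice among several): interpolate the scale geometrically between "image fits screen" and "pixel fits screen", and slide the centre of interest from the image centre to the chosen pixel:

        def zoom_view(sw, sh, iw, ih, px, py, zf):
            fit_scale = min(sw / iw, sh / ih)    # zf = 0: whole image visible
            pixel_scale = min(sw, sh)            # zf = 1: one pixel fills the screen
            scale = fit_scale * (pixel_scale / fit_scale) ** zf

            # centre of interest drifts from the image centre to the pixel centre
            cx = (iw / 2) * (1 - zf) + (px + 0.5) * zf
            cy = (ih / 2) * (1 - zf) + (py + 0.5) * zf

            # top-left of the scaled image on screen; fill the rest with bc
            ox = sw / 2 - cx * scale
            oy = sh / 2 - cy * scale
            return scale, ox, oy   # draw the image scaled by `scale` at (ox, oy)

    At zf = 1 this puts the target pixel, min(sw, sh) screen pixels across, dead centre, which also yields the second requested output. Linear centre interpolation with geometric scale makes the target swim slightly mid-zoom; interpolating the centre geometrically too is the usual refinement if that matters.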

    Read the article
