Search Results

Search found 680 results on 28 pages for 'unifying receiver'.


  • How to send KeyEvents through an input method service to a Dialog, or a Spinner menu?

    - by shutdown11
    I'm trying to implement an input method service that receives intents sent by a remote client and, in response to those, sends an appropriate KeyEvent. In the input method service I'm using this method private void keyDownUp(int keyEventCode) { getCurrentInputConnection().sendKeyEvent( new KeyEvent(KeyEvent.ACTION_DOWN, keyEventCode)); getCurrentInputConnection().sendKeyEvent( new KeyEvent(KeyEvent.ACTION_UP, keyEventCode)); } to send KeyEvents, as in the SoftKeyboard sample, and it works on the home screen and in Activities, but it doesn't work when a Dialog or the menu of a Spinner is in the foreground. The events are sent to the parent activity behind the Dialog. Is there any way to send keys and control the device, as if using the hardware keys, from an input method? A better explanation of what I'm trying to do: I am writing an input method that allows the device to be controlled remotely. In a client (a Java application on my desktop PC) I type a command (for example "UP"), a server on the device sends the intent with that information via sendBroadcast(), and a receiver in the input method gets it and calls keyDownUp with the keycode of the DPAD_UP key. It generally works, but when I go to an app that shows a dialog, keyDownUp doesn't send the key event to the dialog (for example to select the Yes or No buttons); it keeps controlling the activity behind the Dialog.
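    A minimal sketch of the receiving side described above (the action string, extra key, and class name are my assumptions, not taken from the post), with the broadcast receiver registered inside the InputMethodService so it can call keyDownUp():

        // Sketch only: an InputMethodService that reacts to broadcasts from a remote client.
        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;
        import android.content.IntentFilter;
        import android.inputmethodservice.InputMethodService;
        import android.view.KeyEvent;
        import android.view.inputmethod.InputConnection;

        public class RemoteKeyIME extends InputMethodService {

            // Receives commands broadcast by the on-device server (action/extra names assumed).
            private final BroadcastReceiver remoteKeyReceiver = new BroadcastReceiver() {
                @Override
                public void onReceive(Context context, Intent intent) {
                    if ("UP".equals(intent.getStringExtra("command"))) {
                        keyDownUp(KeyEvent.KEYCODE_DPAD_UP);
                    }
                }
            };

            @Override
            public void onCreate() {
                super.onCreate();
                registerReceiver(remoteKeyReceiver, new IntentFilter("com.example.REMOTE_KEY"));
            }

            @Override
            public void onDestroy() {
                unregisterReceiver(remoteKeyReceiver);
                super.onDestroy();
            }

            private void keyDownUp(int keyEventCode) {
                InputConnection ic = getCurrentInputConnection();
                if (ic != null) {   // null when no editor is attached to the IME
                    ic.sendKeyEvent(new KeyEvent(KeyEvent.ACTION_DOWN, keyEventCode));
                    ic.sendKeyEvent(new KeyEvent(KeyEvent.ACTION_UP, keyEventCode));
                }
            }
        }

    Note that getCurrentInputConnection() only targets the editor the IME is currently attached to, which may be why the events end up in the activity that owns the connection rather than in the Dialog or Spinner popup in front of it.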

    Read the article

  • Null Pointer Exception in my BroadcastReceiver class

    - by user1760007
    I want to search a database and toast a specific column on phone startup. The app keeps crashing with an exception even though I feel the code is correct. @Override public void onReceive(Context ctx, Intent intent) { Log.d("omg", "1"); DBAdapter dbAdapter = new DBAdapter(ctx); Log.d("omg", "2"); Cursor cursor = dbAdapter.fetchAllItems(); Log.d("omg", "3"); if (cursor.moveToFirst()) { Log.d("omg", "4"); do { Log.d("omg", "5"); String title = cursor.getString(cursor.getColumnIndex("item")); Log.d("omg", "6"); // i = cursor.getInt(cursor.getColumnIndex("id")); Toast.makeText(ctx, title, Toast.LENGTH_LONG).show(); } while (cursor.moveToNext()); } cursor.close(); } The frustrating part is that I don't see any of my "omg" logs show up in logcat; I only see the crash. I get three lines of errors in logcat. 10-19 12:31:11.656: E/AndroidRuntime(1471): java.lang.RuntimeException: Unable to start receiver com.test.toaster.MyReciever: java.lang.NullPointerException 10-19 12:31:11.656: E/AndroidRuntime(1471): at com.test.toaster.DBAdapter.fetchAllItems(DBAdapter.java:96) 10-19 12:31:11.656: E/AndroidRuntime(1471): at com.test.toaster.MyReciever.onReceive(MyReciever.java:26) For anyone interested, here is my DBAdapter fetchAllItems code: public Cursor fetchAllItems() { return mDb.query(DATABASE_TABLE, new String[] { KEY_ITEM, KEY_PRIORITY, KEY_ROWID }, null, null, null, null, null); }
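    Nothing in the snippet above opens the underlying database before fetchAllItems() runs, which is the usual cause of an NPE at mDb.query(...) in the classic DBAdapter pattern (mDb is still null). A hedged sketch of that fix; the open()/close() method names are assumptions based on the common Notepad-style DBAdapter, not something shown in the post:

        // Sketch: open the adapter before querying, close it afterwards.
        @Override
        public void onReceive(Context ctx, Intent intent) {
            DBAdapter dbAdapter = new DBAdapter(ctx);
            dbAdapter.open();                       // assumption: DBAdapter exposes open()/close()
            Cursor cursor = dbAdapter.fetchAllItems();
            if (cursor.moveToFirst()) {
                do {
                    String title = cursor.getString(cursor.getColumnIndex("item"));
                    Toast.makeText(ctx, title, Toast.LENGTH_LONG).show();
                } while (cursor.moveToNext());
            }
            cursor.close();
            dbAdapter.close();
        }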

    Read the article

  • Need help extrapolating Java code

    - by Berlioz
    If anyone is familiar with Rebecca Wirfs-Brock: she has a piece of Java code in her book Object Design: Roles, Responsibilities, and Collaborations. Here is the quote: Applying Double Dispatch to a Specific Problem. To implement the game Rock, Paper, Scissors we need to write code that determines whether one object “beats” another. The game has nine possible outcomes based on the three kinds of objects. The number of interactions is the cross product of the kinds of objects. Case or switch statements are often governed by the type of data that is being operated on. The object-oriented language equivalent is to base its actions on the class of some other object. In Java, it looks like this. Here is the piece of Java code on page 16: import java.util.*; import java.lang.*; public class Rock { public static void main(String args[]) { } public static boolean beats(GameObject object) { if (object.getClass.getName().equals("Rock")) { result = false; } else if (object.getClass.getName().equals("Paper")) { result = false; } else if (object.getClass.getName().equals("Scissors")) { result = true; } return result; } } "This is not a very good solution. First, the receiver needs to know too much about the argument. Second, there is one of these nested conditional statements in each of the three classes. If new kinds of objects could be added to the game, each of the three classes would have to be modified." Can anyone share with me how to get this "less than optimal" piece of code to work, in order to see it working? She proceeds to demonstrate a better way, but I will spare you. Thanks
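    For reference, here is a minimal, compilable version of the naive approach the book is criticizing. The GameObject base class, the getSimpleName() call (instead of the excerpt's getName(), to avoid package-name issues), and the small main() demo are my additions so that it actually runs; the logic is otherwise the same nested conditional:

        // Sketch: the "less than optimal" version, just to see it run.
        class GameObject {}

        class Paper extends GameObject {}

        class Scissors extends GameObject {}

        public class Rock extends GameObject {

            public static boolean beats(GameObject object) {
                boolean result = false;              // the excerpt never declares this
                String name = object.getClass().getSimpleName();
                if (name.equals("Rock")) {
                    result = false;                  // Rock ties with Rock
                } else if (name.equals("Paper")) {
                    result = false;                  // Paper covers Rock
                } else if (name.equals("Scissors")) {
                    result = true;                   // Rock crushes Scissors
                }
                return result;
            }

            public static void main(String[] args) {
                System.out.println("Rock beats Scissors? " + beats(new Scissors()));
                System.out.println("Rock beats Paper?    " + beats(new Paper()));
            }
        }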

    Read the article

  • Any guidelines for handling the Headset and Bluetooth AVRC transport controls in Android 2.2

    - by StefanK
    I am trying to figure out what the correct (new) approach is for handling Intent.ACTION_MEDIA_BUTTON in Froyo. In pre-2.2 days we had to register a BroadcastReceiver (either permanently or at run time) and the media button events would arrive, as long as no other application intercepted them and aborted the broadcast. Froyo still seems to somewhat support that model (at least for the wired headset), but it also introduces the registerMediaButtonEventReceiver and unregisterMediaButtonEventReceiver methods that seem to control the "transport focus" between applications. During my experiments, using registerMediaButtonEventReceiver does cause both the Bluetooth and the wired headset button presses to be routed to the application's broadcast receiver (the app gets the "transport focus"), but it looks like any change in the audio routing (for example unplugging the headset) shifts the focus back to the default media player. What is the logic behind the implementation in Android 2.2? What is the correct way to handle transport controls? Do we have to detect the change in the audio routing and try to re-gain the focus? This is an issue that any 3rd party media player on the Android platform has to deal with, so I hope that somebody (probably a Google engineer) can provide some guidelines that we can all follow. Having a standard approach may make headset button controls a bit more predictable for the end users. Stefan
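    For anyone comparing notes, the 2.2-style registration being discussed looks roughly like the sketch below (the receiver class name and the idea of re-registering after a routing change are my assumptions, not an official recommendation); the receiver itself still has to be declared in the manifest with a MEDIA_BUTTON intent filter:

        // Sketch: claiming the media buttons on Android 2.2 (API 8), from code that has a Context.
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        ComponentName mediaButtonReceiver =
                new ComponentName(context, MediaButtonReceiver.class);   // assumed receiver class
        am.registerMediaButtonEventReceiver(mediaButtonReceiver);

        // One (unconfirmed) way to cope with the focus moving after a routing change is simply
        // to call registerMediaButtonEventReceiver() again when ACTION_AUDIO_BECOMING_NOISY or
        // ACTION_HEADSET_PLUG is observed.

        // When the app no longer wants the buttons:
        am.unregisterMediaButtonEventReceiver(mediaButtonReceiver);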

    Read the article

  • NMEA data received but empty. Is there any secret?

    - by Roland Bertolom
    I have a tablet "Futjitsu Stylistic Q550". It's running on Windows 7 (not Phone!). It has a built-in GPS-receiver "Sierra Wireless". I need to parse NMEA data from COM-port. I can do it but it's always empty! Like "$GPRMC,,V,,,,,,,,,,N*53". I've tried standing on open space a long time (so my Android device had located me via GPS for a long time) but NMEA data still empty. So I suppose that GPS is off. But I don't know how to figure it out. I've tried send to COM port $PARAM,START,0*61 but no changes. I've tried to insert SIM-card into the device, as it was suggested on one forum but result was the same. Well is it possible that GPS is idle or something or it's just not working? And if it is idle or off how can I enable it? And.. That looks strange but GSV enumerates satellites but everyone of them has still no data e.g.: $GPGSV,4,1,16,32,,,,11,,,,23,,,*78

    Read the article

  • mailing system DB structure, need help

    - by Anna
    I have a system where a user (the sender) can write a note to friends (the receivers); the number of receivers is >= 0. The text of the message is saved in the DB and is visible to the sender and all receivers when they log in to the system. The sender can add more receivers at any time. Moreover, any of the receivers can edit the message and even remove it from the DB. For this system I created 3 tables, briefly: users(userID, username, password) messages(messageID, text) list(id, senderID, receiverID, messageID) In table "list" each row corresponds to a sender-receiver pair, like sender_x_ID -- receiver_1_ID -- message_1_ID sender_x_ID -- receiver_2_ID -- message_1_ID sender_x_ID -- receiver_3_ID -- message_1_ID Now the problems are: 1. If a user deletes the message from table "messages", how do I automatically delete all rows from table "list" which correspond to the deleted message? Do I have to include some foreign keys? More important: 2. If the sender has, let's say, 3 receivers for his message1 (username1, username2 and username3) and at a certain moment decides to add username4 and username5 and at the same time exclude username1 from the list of receivers, the PHP code will get the new list of receivers (username2, username3, username4, username5). That means inserting into table "list" sender_x_ID -- receiver_4_ID -- message_1_ID sender_x_ID -- receiver_5_ID -- message_1_ID and also deleting from table "list" the row corresponding to username1 (who is not in the list of receivers any more): sender_x_ID -- receiver_1_ID -- message_1_ID Which SQL queries should I send from PHP to do this in an easy and intelligent way? Please help! Examples of SQL queries would be perfect!
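    A hedged sketch of the two pieces asked about. The foreign-key-with-cascade approach assumes MySQL/InnoDB, and the snippet is written as Java/JDBC only to match the other examples on this page; the same SQL strings work from PHP with mysqli or PDO. Table and column names follow the post:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.Statement;

        public class MessageListQueries {

            // 1. Let the database do the cleanup: with ON DELETE CASCADE, deleting a row from
            //    "messages" automatically deletes the matching rows in "list".
            static void addCascadeConstraint(Connection con) throws Exception {
                try (Statement st = con.createStatement()) {
                    st.execute("ALTER TABLE list ADD CONSTRAINT fk_list_message "
                             + "FOREIGN KEY (messageID) REFERENCES messages(messageID) "
                             + "ON DELETE CASCADE");
                }
            }

            // 2. Sync the receiver list: one DELETE per receiver that was removed,
            //    one INSERT per receiver that was added.
            static void removeReceiver(Connection con, long sender, long receiver, long message) throws Exception {
                try (PreparedStatement ps = con.prepareStatement(
                        "DELETE FROM list WHERE senderID = ? AND receiverID = ? AND messageID = ?")) {
                    ps.setLong(1, sender); ps.setLong(2, receiver); ps.setLong(3, message);
                    ps.executeUpdate();
                }
            }

            static void addReceiver(Connection con, long sender, long receiver, long message) throws Exception {
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO list (senderID, receiverID, messageID) VALUES (?, ?, ?)")) {
                    ps.setLong(1, sender); ps.setLong(2, receiver); ps.setLong(3, message);
                    ps.executeUpdate();
                }
            }
        }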

    Read the article

  • Get Object from memory using memory address

    - by Hamza Karmouda
    I want to know how to get an object back from memory, in my case a MediaRecorder. Here's my MyMic class: public class MyMic { MediaRecorder recorder2; File file; private Context c; public MyMic(Context context){ this.c = context; } public void stopRecord() { recorder2.stop(); recorder2.reset(); recorder2.release(); } public void startRecord() { recorder2 = new MediaRecorder(); recorder2.setAudioSource(MediaRecorder.AudioSource.MIC); recorder2.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); recorder2.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB); recorder2.setOutputFile(file.getPath()); try { recorder2.prepare(); recorder2.start(); } catch (IllegalStateException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } } And my receiver class: public class MyReceiver extends BroadcastReceiver { private Context c; private MyMic myMic; @Override public void onReceive(Context context, Intent intent) { this.c = context; myMic = new MyMic(c); if (/* my condition */ true) { myMic.startRecord(); } else { myMic.stopRecord(); } } } So when I call startRecord() it creates a new MediaRecorder, but when my receiver runs a second time and instantiates the class again, I can't retrieve my object. Can I retrieve my MediaRecorder using its address?
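    There is no supported way to look an object up again from its raw memory address; what you can do is keep a reference alive between the two broadcasts. A hedged sketch of one workable arrangement (the holder class and the "start" extra are my assumptions); note that if Android kills the process between broadcasts the static reference is lost, so a Service is the more robust home for a long-running MediaRecorder:

        // Sketch: one shared MyMic (and therefore one MediaRecorder) per process.
        public class MyMicHolder {                       // separate file in practice
            private static MyMic instance;

            public static synchronized MyMic get(Context context) {
                if (instance == null) {
                    instance = new MyMic(context.getApplicationContext());
                }
                return instance;
            }
        }

        public class MyReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                MyMic myMic = MyMicHolder.get(context);          // same object on every broadcast
                if (intent.getBooleanExtra("start", false)) {    // stand-in for "my condition"
                    myMic.startRecord();
                } else {
                    myMic.stopRecord();
                }
            }
        }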

    Read the article

  • Matlab GUI: How to Save the Results of Functions (states of application)

    - by niko
    Hi, I would like to create an animation which enables the user to go backward and forward through the steps of a simulation. The animation has to simulate the iterative process of channel decoding (a receiver receives a block of bits, performs an operation and then checks whether the block satisfies the parity rules; if it doesn't, the operation is performed again, and the process finally ends when the code satisfies the given rules). I have written the functions which perform the decoding process and return an m x n x i matrix, where m x n is the block of data and i is the iteration index. So if it takes 3 iterations to decode the data, the function returns an m x n x 3 matrix with each step stored. In the GUI (.fig file) I put a "decode" button which runs the decoding method, and there are "back" and "forward" buttons which should enable the user to switch between the data of the recorded steps. I have stored the "decodedData" matrix and the currentStep value as global variables, so clicking the "back" and "forward" buttons should change the index and point to the appropriate step states. When I tried to debug the application, the method returned the decoded data, but when I clicked "back" and "forward" the decoded data appeared not to be declared. Does anyone know how it is possible to access (or store) the results of the functions in order to enable the described logic in a Matlab GUI?

    Read the article

  • Can't open COM1 from application launched at startup

    - by n0rd
    I'm using WinLIRC with an IR receiver connected to serial port COM1 on Windows 7 x64. WinLIRC is added to the Startup folder (Start -> All applications -> Startup) so it starts every time I log in. Very often (but not always) I see initialization error messages from WinLIRC, which continue for some time (a couple of minutes) if I retry initialization, and after some retries it initializes correctly and works fine. If I remove it from Startup and start it manually at any other moment, it starts without errors. I've downloaded the WinLIRC sources and added MessageBox calls here and there so I can see what happens during initialization, and found out that the CreateFile call fails: if((hPort=CreateFile( settings.port,GENERIC_READ | GENERIC_WRITE, 0,0,OPEN_EXISTING,FILE_FLAG_OVERLAPPED,0))==INVALID_HANDLE_VALUE) { char buffer[256]; sprintf_s(buffer, "CreateFile(%s) failed with %d", settings.port, GetLastError()); MessageBox(NULL, buffer, "debug", MB_OK); hPort=NULL; return false; } I see a message box saying "CreateFile(COM1) failed with 5", and 5 is the error code for "Access denied" according to this link. So the question is: why can opening the COM port fail with such an error right after booting Windows, yet succeed a few seconds or minutes later?

    Read the article

  • How to get a UIAlertView's result??

    - by cocos2dbeginner
    Hi, I implemented GameKit and it all works fine now. But if the user presses send, the data is instantly sent to the other iPhone/iPod/iPad and instantly written. So now I want to implement a confirmation screen for the receiver. In my receiveData method (from GameKit) I have an array. If the user presses yes, the array should be written into a file; if not, it shouldn't be. #pragma mark - #pragma mark - GKreceiveData - (void) receiveData:(NSData *)data fromPeer:(NSString *)peer inSession: (GKSession *)session context:(void *)context { NSDictionary *dict = [NSKeyedUnarchiver unarchiveObjectWithData:data]; UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Incoming Set" message:[NSString stringWithFormat:@"%@ wants to send you a Set named: \n\n %@",[session displayNameForPeer:peer], [dict valueForKey:@"SetName"]] delegate:self cancelButtonTitle:@"Cancel" otherButtonTitles:@"OK", nil]; [alert show]; [alert release]; } - (void)alertView:(UIAlertView *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex { // the user clicked one of the OK/Cancel buttons if (buttonIndex == 0) { //NSLog(@"ok"); //this should happen if the user presses OK on the alert view [dataArray addObject:dict]; //I can't access "dict" here } else { //NSLog(@"cancel"); } } Do you see the problem? What can I do?

    Read the article

  • how to create an object using self?

    - by Nick
    I thought I understood the use of self as referring to anything in the current class. After encountering this warning and the subsequent run failure, I have googled many variants of "define self" or "usage of self" and gotten nowhere. The problem is how to create an object without the warning, and to understand why it occurs. #import <Cocoa/Cocoa.h> @interface Foo : NSObject { Foo *obj; } -(void)beta; @end #import "Foo.h" @implementation Foo -(void)beta{ obj = [self new]; // 'Foo' may not respond to '-new' } @end Note, if I substitute Foo for self, there's no problem. I thought the class name and self were equivalent, but obviously the compiler doesn't think so. Perhaps an explanation of what's wrong here will not only solve my problem but also improve my understanding of the usage of self. Are there any tutorials about the proper usage of self? I couldn't find anything beyond something like "self is the receiver of the message," which didn't help me at all.

    Read the article

  • Make an abstract class or use a processor?

    - by Tim Murphy
    I'm developing a class to compare two directories and run an action when a directory/file in the source directory does not exist in the destination directory. Here is a prototype of the class: public abstract class IdenticalDirectories { private DirectoryInfo _sourceDirectory; private DirectoryInfo _destinationDirectory; protected abstract void DirectoryExists(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory); protected abstract void DirectoryDoesNotExist(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory); protected abstract void FileExists(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory); protected abstract void FileDoesNotExist(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory); public IdenticalDirectories(DirectoryInfo sourceDirectory, DirectoryInfo destinationDirectory) { ... } public void Run() { foreach (DirectoryInfo sourceSubDirectory in _sourceDirectory.GetDirectories()) { DirectoryInfo destinationSubDirectory = this.GetDestinationDirectoryInfo(subDirectory); if (destinationSubDirectory.Exists()) { this.DirectoryExists(sourceSubDirectory, destinationSubDirectory); } else { this.DirectoryDoesNotExist(sourceSubDirectory, destinationSubDirectory); } foreach (FileInfo sourceFile in sourceSubDirectory.GetFiles()) { FileInfo destinationFile = this.GetDestinationFileInfo(sourceFile); if (destinationFile.Exists()) { this.FileExists(sourceFile, destinationFile); } else { this.FileDoesNotExist(sourceFile, destinationFile); } } } } } The above prototype is an abstract class. I'm wondering if it would be better to make the class non-abstract and have the Run method receive a processor? e.g. public void Run(IIdenticalDirectoriesProcessor processor) { foreach (DirectoryInfo sourceSubDirectory in _sourceDirectory.GetDirectories()) { DirectoryInfo destinationSubDirectory = this.GetDestinationDirectoryInfo(subDirectory); if (destinationSubDirectory.Exists()) { processor.DirectoryExists(sourceSubDirectory, destinationSubDirectory); } else { processor.DirectoryDoesNotExist(sourceSubDirectory, destinationSubDirectory); } foreach (FileInfo sourceFile in sourceSubDirectory.GetFiles()) { FileInfo destinationFile = this.GetDestinationFileInfo(sourceFile); if (destinationFile.Exists()) { processor.FileExists(sourceFile, destinationFile); } else { processor.FileDoesNotExist(sourceFile, destinationFile); } } } } What do you see as the pros and cons of each implementation?

    Read the article

  • Dynamically adding custom view to RemoteViews

    - by Naidu
    Could anyone help me do this? My code is like: public class CustomClass extends View { //uses onDraw() to do something } For displaying my custom view on the home screen I created a class extending BroadcastReceiver: public class customAppWidgetProvider extends BroadcastReceiver { public void onReceive(Context context, Intent intent) { String action = intent.getAction(); if (AppWidgetManager.ACTION_APPWIDGET_UPDATE.equals(action)) { RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.main); // Here I want to create my custom view object and add it to the LinearLayout in main.xml CustomClass object = new CustomClass(context); LinearLayout layout = new LinearLayout(context); layout.setLayoutParams(new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT)); layout.addView(object); views.addView(R.id.linearlayout, (ViewGroup) layout); views.setOnClickPendingIntent(R.id.analog_appwidget, PendingIntent.getActivity(context, 0, new Intent(context, AlarmClock.class), PendingIntent.FLAG_CANCEL_CURRENT)); int[] appWidgetIds = intent.getIntArrayExtra( AppWidgetManager.EXTRA_APPWIDGET_IDS); AppWidgetManager gm = AppWidgetManager.getInstance(context); gm.updateAppWidget(appWidgetIds, views); } } } But adding a ViewGroup to a RemoteViews reference is not working. main.xml contains only a LinearLayout, and I want to add my custom view object to it, but after running this nothing shows on screen. Please help me do this. Thanks in advance.
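    The underlying limitation is that RemoteViews only supports a fixed whitelist of view classes, so neither a custom View subclass nor a LinearLayout built in code can be handed to it directly. A common workaround, sketched below with assumed sizes and an assumed R.id.widget_image ImageView declared in main.xml, is to draw the custom view into a Bitmap and push that bitmap into the widget:

        // Sketch: render the custom view off-screen, then ship the pixels to the app widget.
        CustomClass customView = new CustomClass(context);
        int width = 200, height = 200;                      // assumed widget size in pixels
        customView.measure(View.MeasureSpec.makeMeasureSpec(width, View.MeasureSpec.EXACTLY),
                           View.MeasureSpec.makeMeasureSpec(height, View.MeasureSpec.EXACTLY));
        customView.layout(0, 0, width, height);

        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        customView.draw(new Canvas(bitmap));                // runs CustomClass.onDraw()

        RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.main);
        views.setImageViewBitmap(R.id.widget_image, bitmap); // ImageView assumed to exist in main.xml
        AppWidgetManager.getInstance(context).updateAppWidget(
                intent.getIntArrayExtra(AppWidgetManager.EXTRA_APPWIDGET_IDS), views);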

    Read the article

  • Using Intent extras with AlarmManager

    - by Ashwin
    I want to know if this code will work (I cannot try it out right now; moreover, I have a few doubts that have to be cleared up). Intent intent = new Intent(context, AlarmReceiver.class); intent.putExtra("user", global.getUsername()); intent.putExtra("password", global.getPassword()); PendingIntent sender = PendingIntent.getBroadcast(context, 192837, intent, PendingIntent.FLAG_UPDATE_CURRENT); // Get the AlarmManager service Log.v("inside log_run", "new service started"); AlarmManager am = (AlarmManager) getSystemService(ALARM_SERVICE); am.setRepeating(AlarmManager.RTC_WAKEUP, IMMEDIATELY, 60000, sender); finish(); As you can see, this code starts an AlarmManager with setRepeating(). If you look at the intent (actually the PendingIntent) passed on to the BroadcastReceiver, there are two extras passed along. These are global variables that live as long as the application is running, but this AlarmManager is meant to run in the background (that is, the application will be alive only for the first few calls of the AlarmManager to the broadcast receiver). My question: will the AlarmManager make a copy of the global variables (the username and password) and keep that copy to be passed along with the intent? These values will be used in the broadcast receiver.
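    To make the copying question concrete: putExtra() stores the current values in the Intent, and PendingIntent.getBroadcast() hands that filled-in Intent over to the system, so the extras should still be delivered even after the Activity that read them from the globals has finished. A minimal hedged sketch of the receiving side (class name and extra keys follow the snippet above):

        // Sketch: the receiver reads back the values that were copied into the Intent.
        public class AlarmReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                String user = intent.getStringExtra("user");
                String password = intent.getStringExtra("password");
                // These are the values captured when putExtra() ran, not live references
                // to the application's global object.
                Log.v("AlarmReceiver", "alarm fired for user " + user);
            }
        }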

    Read the article

  • Sending a message to nil?

    - by Ryan Delucchi
    As a Java developer who is reading Apple's Objective-C 2.0 documentation: I wonder as to what sending a message to nil means - let alone how it is actually useful. Taking an excerpt from the documentation: There are several patterns in Cocoa that take advantage of this fact. The value returned from a message to nil may also be valid: If the method returns an object, any pointer type, any integer scalar of size less than or equal to sizeof(void*), a float, a double, a long double, or a long long, then a message sent to nil returns 0. If the method returns a struct, as defined by the Mac OS X ABI Function Call Guide to be returned in registers, then a message sent to nil returns 0.0 for every field in the data structure. Other struct data types will not be filled with zeros. If the method returns anything other than the aforementioned value types the return value of a message sent to nil is undefined. Has Java rendered my brain incapable of grokking the explanation above? Or is there something that I am missing that would make this as clear as glass? Note: Yes, I do get the idea of messages/receivers in Objective-C, I am simply confused about a receiver that happens to be nil.

    Read the article

  • recvfrom() return values in Stop-and-Wait UDP?

    - by mavErick
    I am trying to implement a Stop-and-Wait UDP client-server socket program in C. As is well known, there are basically three possible scenarios in Stop-and-Wait flow control: after transmitting a packet, the sender receives a correct ACK and thus starts transmitting the next packet; the sender receives an incorrect ACK and thus retransmits this packet; or the sender receives no ACK within a TIMEOUT and thus retransmits this packet. My idea is to differentiate these three scenarios with the return value of recvfrom() on the sender side. For scenarios 1 and 2, recvfrom() just returns the length of the received ACK. Since in my implementation the incorrect ACK is of the same length as the correct one, I will have to go deeper and check the contents of the ACK. It's not a big deal; I know how to do that. Problems come when I am trying to recognize scenario 3, where no ACK is received. What confuses me is that my recvfrom() is within a while loop, so recvfrom() will be called constantly. What will it return when the receiver is not actually sending the sender an ACK? Is it 0 or 1?
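    Worth spelling out: on a blocking UDP socket recvfrom() does not return 0 or 1 when nothing arrives, it simply blocks (it returns 0 only for a zero-length datagram, and -1 on error), so scenario 3 is normally detected with a receive timeout (SO_RCVTIMEO or select()) rather than from the return value. The same loop structure is sketched in Java below, only because the other examples on this page are Java; in C the equivalent is setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, ...) and treating EAGAIN/EWOULDBLOCK from recvfrom() as the timeout:

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.SocketTimeoutException;

        // Sketch: stop-and-wait sender loop where a timeout stands in for "no ACK received".
        class StopAndWaitSender {
            static void sendWithRetry(DatagramSocket socket, DatagramPacket dataPacket) throws Exception {
                byte[] ackBuf = new byte[64];
                DatagramPacket ack = new DatagramPacket(ackBuf, ackBuf.length);
                socket.setSoTimeout(500);                // assumed TIMEOUT of 500 ms
                while (true) {
                    socket.send(dataPacket);
                    try {
                        socket.receive(ack);             // scenario 1 or 2: some ACK arrived
                        if (isCorrectAck(ack)) {
                            return;                      // scenario 1: move on to the next packet
                        }
                        // scenario 2: wrong ACK, loop around and retransmit
                    } catch (SocketTimeoutException e) {
                        // scenario 3: no ACK within TIMEOUT, loop around and retransmit
                    }
                }
            }

            static boolean isCorrectAck(DatagramPacket ack) {
                return ack.getLength() > 0 && ack.getData()[0] == 1;   // placeholder check
            }
        }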

    Read the article

  • The future for Microsoft

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/16/the-future-for-microsoft.aspxMicrosoft is in the process of reinventing itself. While some may argue that it’s “too little, too late” or that their growing consumer-focused strategy is wrong, the truth of the situation is that Microsoft is reinventing itself into a new company. While Microsoft is now calling themselves a “devices and services” company, that’s not entirely accurate. Let’s look at some facts: Microsoft will always (for the long-term foreseeable future) be financially split into the following divisions: Windows/Operating Systems, which for FY13 made up approximately 24% of overall revenue. Server and Tools, which for FY13 made up approximately 26% of overall revenue. Enterprise/Business Products, which for FY13 made up approximately 32% of overall revenue. Entertainment and Devices, which for FY13 made up approximately 13% of overall revenue. Online Services, which for FY13 made up approximately 4% of overall revenue. It is important to realize that hardware products like the Surface fall under the Windows/Operating Systems division while products like the Xbox 360 fall under the Entertainment and Devices division. (Presumably other hardware, such as mice, keyboards, and cameras, also fall under the Entertainment and Devices division.) It’s also unclear where Microsoft’s recent acquisition of Nokia’s handset division will fall, but let’s assume that it will be under Entertainment and Devices as well. Now, for the sake of argument, let’s assume a slightly different structure that I think is more in line with how Microsoft presents itself and how the general public sees it: Consumer Products and Devices, which would probably make up approximately 9% of overall revenue. Developer Tools, which would probably make up approximately 13% of overall revenue. Enterprise Products and Devices, which would probably make up approximately 47% of overall revenue. Entertainment, which would probably make up approximately 13% of overall revenue. Online Services, which would probably make up approximately 17% of overall revenue. (Just so we’re clear, in this structure hardware products like the Surface, a portion of Windows sales, and other hardware fall under the Consumer Products and Devices division. I’m assuming that more of the income for the Windows division is coming from enterprise/volume licenses so 15% of that income went to the Enterprise Products and Devices division. Most of the enterprise services, like Azure, fall under the Online Services division so half of the Server and Tools income went there as well.) No matter how you look at it, the bulk of Microsoft’s income still comes from not just the enterprise but also software sales, and this really shouldn’t surprise anyone. So, now that the stage is set…what’s the future for Microsoft? The future I see for Microsoft (again, this is just my prediction based on my own instinct, gut-feel and publicly available information) is this: Microsoft is becoming a consumer-focused enterprise company. Let’s look at it a different way. Microsoft is an enterprise-focused company trying to create a larger consumer presence.  To a large extent, this is the exact opposite of Apple, who is really a consumer-focused company trying to create a larger enterprise presence. The major reason consumer-focused companies (like Apple) have started making in-roads into the enterprise is the “bring your own device” phenomenon. 
Yes, Apple has created some “game-changing” products but their enterprise influence is still relatively small. Unfortunately (for this blog post at least), Apple provides revenue in terms of hardware products rather than business divisions, so it’s not possible to do a direct comparison. However, in the interest of transparency, from Apple’s Quarterly Report (filed 24 July 2013), their revenue breakdown is: iPhone, which for the 3 months ending 29 June 2013 made up approximately 51% of revenue. iPad, which for the 3 months ending 29 June 2013 made up approximately 18% of revenue. Mac, which for the 3 months ending 29 June 2013 made up approximately 14% of revenue. iPod, which for the 3 months ending 29 June 2013 made up approximately 2% of revenue. iTunes, Software, and Services, which for the 3 months ending 29 June 2013 made up approximately 11% of revenue. Accessories, which for the 3 months ending 29 July 2013 made up approximately 3% of revenue. From this, it’s pretty clear that Apple is a consumer-and-hardware-focused company. At this point, you may be asking yourself “Where is all of this going?” The answer to that lies in Microsoft’s shift in company focus. They are becoming more consumer focused, but what exactly does that mean? The biggest change (at least that’s been in the news lately) is the pending purchase of Nokia’s handset division. This, in combination with their Surface line of tablets and the Xbox, will put Microsoft squarely in the realm of a hardware-focused company in addition to being a software-focused company. That can (and most likely will) shift the revenue split to looking at revenue based on software sales (both consumer and enterprise) and also hardware sales (mostly on the consumer side). If we look at things strictly from a Windows perspective, Microsoft clearly has a lot of irons in the fire at the moment. Discounting the various product SKUs available and painting the picture with broader strokes, there are currently 5 different Windows-based operating systems: Windows Phone Windows Phone 7.x, which runs on top of the Windows CE kernel Windows Phone 8.x+, which runs on top of the Windows 8 kernel Windows RT The ARM-based version of Windows 8, which runs on top of the Windows 8 kernel Windows (Pro) The Intel-based version of Windows 8, which runs on top of the Windows 8 kernel Xbox The Xbox 360, which runs it’s own proprietary OS. The Xbox One, which runs it’s own proprietary OS, a version of Windows running on top of the Windows 8 kernel and a proprietary “manager” OS which manages the other two. Over time, Windows Phone 7.x devices will fade so that really leaves 4 different versions. Looking at Windows RT and Windows Phone 8.x paints an interesting story. Right now, all mobile phone devices run on some sort of ARM chip and that doesn’t look like it will change any time soon. That means Microsoft has two different Windows based operating systems for the ARM platform. Long term, it doesn’t make sense for Microsoft to continue supporting that arrangement. I have long suspected (since the Surface was first announced) that Microsoft will unify these two variants of Windows and recent speculation from some of the leading Microsoft watchers lends credence to this suspicion. It is rumored that upcoming Windows Phone releases will include support for larger screen sizes, relax the requirement to have a hardware-based back button and will continue to improve API parity between Windows Phone and Windows RT. 
At the same time, Windows RT will include support for smaller screen sizes. Since both of these operating systems are based on the same core Windows kernel, it makes sense (both from a financial and development resource perspective) for Microsoft to unify them. The user interfaces are already very similar. So similar in fact, that visually it’s difficult to tell them apart. To illustrate this, here are two screen captures: Other than a few variations (the Bing News app, the picture shown in the Pictures tile and the spacing between the tiles) these are identical. The one on the left is from my Windows 8.1 laptop (which looks the same as on my Surface RT) and the one on the right is from my Windows Phone 8 Lumia 925. This pretty clearly shows that from a consumer perspective, there really is no practical difference between how these two operating systems look and how you interact with them. For the consumer, your entertainment device (Xbox One), phone (Windows Phone) and mobile computing device (Surface [or some other vendors tablet], laptop, netbook or ultrabook) and your desktop computing device (desktop) will all look and feel the same. While many people will denounce this consistency of user experience, I think this will be a good thing in the long term, especially for the upcoming generations. For example, my 5-year old son knows how to use my tablet, phone and Xbox because they all feature nearly identical user experiences. When Windows 8 was released, Microsoft allowed a Windows Store app to be purchased once and installed on as many as 5 devices. With Windows 8.1, this limit has been increased to over 50. Why is that important? If you consider that your phone, computing devices, and entertainment device will be running the same operating system (with minor differences related to physical hardware chipset), that means that I could potentially purchase my sons favorite Angry Birds game once and be able to install it on all of the devices I own. (And for those of you wondering, it’s only 7 [at the moment].) From an app developer perspective, the story becomes even more compelling. Right now there are differences between the different operating systems, but those differences are shrinking. The user interface technology for both is XAML but there are different controls available and different user experience concepts. Some of the APIs available are the same while some are not. You can’t develop a Windows Phone app that can also run on Windows (either Windows Pro or RT). With each release of Windows Phone and Windows RT, those difference become smaller and smaller. Add to this mix the Xbox One, which will also feature a Windows-based operating system and the same “modern” (tile-based) user interface and the visible distinctions between the operating systems will become even smaller. Unifying the operating systems means one set of APIs and one code base to maintain for an app that can run on multiple devices. One code base means it’s easier to add features and fix bugs and that those changes become available on all devices at the same time. It also means a single app store, which will increase the discoverability and reach of your app and consolidate revenue and app profile management. Now, the choice of what devices an app is available on becomes a simple checkbox decision rather than a technical limitation. Ultimately, this means more apps available to consumers, which is always good for the app ecosystem. Is all of this just rumor, speculation and conjecture? 
Of course, but it’s not unfounded. As I mentioned earlier, some of the prominent Microsoft watchers are also reporting similar rumors. However, Microsoft itself has even hinted at this future with their recent organizational changes and by telling developers “if you want to develop for Xbox One, start developing for Windows 8 now.” I think this pretty clearly paints the following picture: Microsoft is committed to the “modern” user interface paradigm. Microsoft is changing their release cadence (for all products, not just operating systems) to be faster and more modular. Microsoft is going to continue to unify their OS platforms both from a consumer perspective and a developer perspective. While this direction will certainly concern some people it will excite many others. Microsoft’s biggest failing has always been following through with a strong and sustained marketing strategy that presents a consistent view point and highlights what this unified and connected experience looks like and how it benefits consumers and enterprises. We’ve started to see some of this over the last few years, but it needs to continue and become more aggressive and consistent. In the long run, I think Microsoft will be able to pull all of these technologies and devices together into one seamless ecosystem. It isn’t going to happen overnight, but my prediction is that we will be there by the end of 2016. As both a consumer and a developer, I, for one, am excited about the future of Microsoft.

    Read the article

  • mythbuntu 12 - lirc device doesn't appear to even exist

    - by FrustratedWithFormsDesigner
    I'm trying to get a new installation of Mythbuntu working. So far, everything is OK except the remote. The sensor for the remote is on my Hauppauge WinTV HVR 1250. First I tried to run irw to see what was being picked up by the sensor: $ irw connect: No such file or directory Then trying to run lircd gives: $ lircd start$ lircd start lircd: can't open or create /var/run/lirc/lircd.pid I look for any lirc devices and find there are none: $ ls /dev/li* ls: cannot access /dev/li*: No such file or directory Just to be sure, I check in /proc/bus/input/devices, which shows me two powerbuttons (not sure why), kbd and mouse dev, and the audio devs. Nothing for the IR receiver on the tuner card (which I thought was strange because shouldn't the tuner show up here?). $ cat /proc/bus/input/devices I: Bus=0019 Vendor=0000 Product=0001 Version=0000 N: Name="Power Button" P: Phys=PNP0C0C/button/input0 S: Sysfs=/devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0 U: Uniq= H: Handlers=kbd event0 B: PROP=0 B: EV=3 B: KEY=10000000000000 0 I: Bus=0019 Vendor=0000 Product=0001 Version=0000 N: Name="Power Button" P: Phys=LNXPWRBN/button/input0 S: Sysfs=/devices/LNXSYSTM:00/LNXPWRBN:00/input/input1 U: Uniq= H: Handlers=kbd event1 B: PROP=0 B: EV=3 B: KEY=10000000000000 0 I: Bus=0003 Vendor=099a Product=7202 Version=0111 N: Name="Wireless Keyboard/Mouse" P: Phys=usb-0000:00:10.1-2/input0 S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.0/input/input2 U: Uniq= H: Handlers=sysrq kbd event2 B: PROP=0 B: EV=120013 B: KEY=1000000000007 ff9f207ac14057ff febeffdfffefffff fffffffffffffffe B: MSC=10 B: LED=7 I: Bus=0003 Vendor=099a Product=7202 Version=0111 N: Name="Wireless Keyboard/Mouse" P: Phys=usb-0000:00:10.1-2/input1 S: Sysfs=/devices/pci0000:00/0000:00:10.1/usb8/8-2/8-2:1.1/input/input3 U: Uniq= H: Handlers=kbd mouse0 event3 B: PROP=0 B: EV=1f B: KEY=4837fff072ff32d bf54444600000000 70001 20c100b17c000 267bfad9415fed 9e168000004400 10000002 B: REL=143 B: ABS=100000000 B: MSC=10 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Line" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input4 U: Uniq= H: Handlers=event4 B: PROP=0 B: EV=21 B: SW=2000 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Front Mic" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input5 U: Uniq= H: Handlers=event5 B: PROP=0 B: EV=21 B: SW=10 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Rear Mic" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input6 U: Uniq= H: Handlers=event6 B: PROP=0 B: EV=21 B: SW=10 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Front Headphone" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input7 U: Uniq= H: Handlers=event7 B: PROP=0 B: EV=21 B: SW=4 I: Bus=0000 Vendor=0000 Product=0000 Version=0000 N: Name="HD-Audio Generic Line-Out" P: Phys=ALSA S: Sysfs=/devices/pci0000:00/0000:00:14.2/sound/card0/input8 U: Uniq= H: Handlers=event8 B: PROP=0 B: EV=21 B: SW=40 According to dmesg, the driver was registered, but it doesn't look like any devices was associated with the driver: $ dmesg | grep irc [ 10.631162] lirc_dev: IR Remote Control driver registered, major 249 So far, I've seen a number of forum pages suggesting that I use some trick to create a link between /dev/lirc and some other device that is the REAL IR sensor, like /dev/event5, but those cases assume that the real device is shown from /proc/bus/input/devices, 
and I don't see any such device there. Any suggestions on how to fix or further diagnose this?

    Read the article

  • SharePoint 2010 Hosting :: Setting Default Column Values on a Folder Programmatically

    - by mbridge
    The reason I write this post today is because my initial searches on the Internet provided me with nothing on the topic.  I was hoping to find a reference to the SDK but I didn’t have any luck.  What I want to do is set a default column value on an existing folder so that new items in that folder automatically inherit that value.  It’s actually pretty easy to do once you know what the class is called in the API.  I did some digging and discovered that class is MetadataDefaults. It can be found in Microsoft.Office.DocumentManagement.dll.  Note: if you can’t find it in the GAC, this DLL is in the 14/CONFIG/BIN folder and not the 14/ISAPI folder.  Add a reference to this DLL in your project.  In my case, I am building a console application, but you might put this in an event receiver or workflow. In my example today, I have simple custom folder and document content types.  I have one shared site column called DocumentType.  I have a document library which each of these content types registered.  In my document library, I have a folder named Test and I want to set its default column values using code.  Here is what it looks like.  Start by getting a reference to the list in question.  This assumes you already have a SPWeb object.  In my case I have created it and it is called site. SPList customDocumentLibrary = site.Lists["CustomDocuments"]; You then pass the SPList object to the MetadataDefaults constructor. MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary); Now I just need to get my SPFolder object in question and pass it to the meethod SetFieldDefault.  This takes a SPFolder object, a string with the name of the SPField to set the default on, and finally the value of the default (in my case “Memo”). SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"]; columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo"); You can set multiple defaults here.  When you’re done, you will need to call .Update(). columnDefaults.Update(); Here is what it all looks like together. using (SPSite siteCollection = new SPSite("http://sp2010/sites/ECMSource")) {     using (SPWeb site = siteCollection.OpenWeb())     {         SPList customDocumentLibrary = site.Lists["CustomDocuments"];         MetadataDefaults columnDefaults = new MetadataDefaults(customDocumentLibrary);          SPFolder testFolder = customDocumentLibrary.RootFolder.SubFolders["Test"];         columnDefaults.SetFieldDefault(testFolder, "DocumentType", "Memo");         columnDefaults.Update();     } } You can verify that your property was set correctly on the Change Default Column Values page in your list This is something that I could see used a lot on an ItemEventReceiver attached to a folder to do metadata inheritance.  Whenever, the user changed the value of the folder’s property, you could have it update the default.  Your code might look something columnDefaults.SetFieldDefault(properties.ListItem.Folder, "MyField", properties.ListItem[" This is a great way to keep the child items updated any time the value a folder’s property changes.  I’m also wondering if this can be done via CAML.  I tried saving a site template, but after importing I got an error on the default values page.  I’ll keep looking and let you know what I find out.

    Read the article

  • How to get bearable 2D and 3D performance on AMD Radeon HD 6950?

    - by l0b0
    I have had an AMD Radeon HD 6950 (i.e., Cayman series) for a couple years now, and I have tried a lot of combinations of drivers and settings with terrible results. I'm completely at a loss as to how to proceed. The open source driver has much better 2D performance, but it offloads all OpenGL rendering to the CPU. What I've tried so far: All the latest stable Ubuntu releases in the period, plus one Linux Mint release. All the latest stable AMD Catalyst Proprietary Display Drivers, and currently 13.1. The unofficial wiki installation instructions for every Ubuntu version and the semi-official Ubuntu instructions. All the tips and tweaks I could find for Minecraft (Optifine, reducing settings to minimum), VLC (postprocessing at minimum, rendering at native video size), Catalyst Control Center (flipped every lever in there) and X11 (some binary toggles I can no longer remember). Results: Typically 13-15 FPS in Minecraft, 30 max (100+ in Windows with the same driver version). Around 10 FPS in Team Fortress 2 using the official Steam client. Choppy video playback, in Flash and with VLC. CPU use goes through the roof when rendering video (150% for 1080p on YouTube in Chromium, 100% for 1080p H264 in VLC). glxgears shows 12.5 FPS when maximized. fgl_glxgears shows 10 FPS when maximized. Hardware details from lshw: Motherboard ASUS P6X58D-E CPU Intel Core i7 CPU 950 @ 3.07GHz (never overclocked; 64 bit) 6 GB RAM Video card product "Cayman PRO [Radeon HD 6950]", vendor "Hynix Semiconductor (Hyundai Electronics)" 2 x 1920x1200 monitors, both connected with HDMI. I feel I must be missing something absolutely fundamental here. Is there no accelerated support for anything on 64-bit architectures? Does a dual monitor completely mess up the driver? $ fglrxinfo display: :0 screen: 0 OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: AMD Radeon HD 6900 Series OpenGL version string: 4.2.11995 Compatibility Profile Context $ glxinfo | grep 'direct rendering' direct rendering: Yes I am currently using the open source driver, with the following results: Full frame rate and low CPU load when playing 1080p video. Black screen (but music in the background) in Team Fortress 2. Similar performance in Minecraft as the Catalyst driver. In hindsight obvious, since both end up offloading the rendering to the CPU. My /var/log/Xorg.0.log after upgrading to AMD Catalyst 13.1. Some possibly important lines: (WW) Falling back to old probe method for fglrx (WW) fglrx: No matching Device section for instance (BusID PCI:0@3:0:1) found The generated xorg.conf. The disabled "monitor" 0-DFP9 is actually an A/V receiver, which sometimes confuses the monitor drivers when turned on/off (but not in Windows). All three "monitor" devices are connected with HDMI. Edit: Chris Carter's suggestion to use the xorg-edgers PPA (Catalyst 13.1) resulted in some improvement, but still pretty bad performance overall: Minecraft stabilizes at 13-17 FPS, but at least the CPU load is "only" at 45-60%. Still 150% CPU use for 1080p video rendering on YouTube in Chromium. Massive improvement for 1080p H264 in VLC: 40-50% CPU use and no visible jitter glxgears performance about doubled to 25-30 FPS when maximized. fgl_glxgears still at ~10 FPS when maximized.

    Read the article

  • Errors when trying to compile the driver for the rtl8192su wireless adapter

    - by Tom Brito
    I have a wireless to usb adapter, and I'm having some trouble to install the drivers on Ubuntu. First of all, the readme says to use the make command, and I already got errors: $ make make[1]: Entering directory `/usr/src/linux-headers-2.6.35-22-generic' CC [M] /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.o /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c: In function ‘rtl8192_usb_probe’: /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12325: error: ‘struct net_device’ has no member named ‘open’ /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12326: error: ‘struct net_device’ has no member named ‘stop’ /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12327: error: ‘struct net_device’ has no member named ‘tx_timeout’ /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12328: error: ‘struct net_device’ has no member named ‘do_ioctl’ /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12329: error: ‘struct net_device’ has no member named ‘set_multicast_list’ /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12330: error: ‘struct net_device’ has no member named ‘set_mac_address’ /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12331: error: ‘struct net_device’ has no member named ‘get_stats’ /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.c:12332: error: ‘struct net_device’ has no member named ‘hard_start_xmit’ make[2]: *** [/home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u/r8192U_core.o] Error 1 make[1]: *** [_module_/home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/HAL/rtl8192u] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-2.6.35-22-generic' make: *** [all] Error 2 /home/wellington/Desktop/rtl8192su_linux_2.4_2.6.0003.0301.2010/ is the path where I copied the drivers on my computer. Any idea how to solve this? (I don't even know what the error is...) update: sudo lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. 
physical id: 0 bus info: pci@0000:01:00.0 logical name: eth0 version: 03 serial: 78:e3:b5:e7:5f:6e size: 10MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s resources: irq:42 ioport:d800(size=256) memory:fbeff000-fbefffff memory:faffc000-faffffff memory:fbec0000-fbedffff *-network DISABLED description: Wireless interface physical id: 2 logical name: wlan0 serial: 00:26:18:a1:ae:64 capabilities: ethernet physical wireless configuration: broadcast=yes multicast=yes wireless=802.11b/g sudo lspci 00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 18) 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18) 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) 00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06) 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06) 00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 06) 00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a6) 00:1f.0 ISA bridge: Intel Corporation 5 Series Chipset LPC Interface Controller (rev 06) 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 06) 00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06) 01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03) 02:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6315 Series Firewire Controller (rev 01) sudo lsusb Bus 002 Device 003: ID 0bda:0158 Realtek Semiconductor Corp. USB 2.0 multicard reader Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 004: ID 045e:00f9 Microsoft Corp. Wireless Desktop Receiver 3.1 Bus 001 Device 003: ID 0b05:1786 ASUSTek Computer, Inc. Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Read the article

  • Observing flow control idle time in TCP

    - by user12820842
    Previously I described how to observe congestion control strategies during transmission, and here I talked about TCP's sliding window approach for handling flow control on the receive side. A neat trick would now be to put the pieces together and ask the following question - how often is TCP transmission blocked by congestion control (send-side flow control) versus a zero-sized send window (which is the receiver saying it cannot process any more data)? So in effect we are asking whether the size of the receive window of the peer or the congestion control strategy may be sub-optimal. The result of such a problem would be that we have TCP data that we could be transmitting but we are not, potentially affecting throughput. So flow control is in effect: when the congestion window is less than or equal to the amount of bytes outstanding on the connection (we can derive this from args[3]->tcps_snxt - args[3]->tcps_suna, i.e. the difference between the next sequence number to send and the lowest unacknowledged sequence number); and when the window in the TCP segment received is advertised as 0. We time from these events until we send new data (i.e. args[4]->tcp_seq >= the snxt value recorded when the window closed). Here's the script: #!/usr/sbin/dtrace -s #pragma D option quiet tcp:::send / (args[3]->tcps_snxt - args[3]->tcps_suna) >= args[3]->tcps_cwnd / { cwndclosed[args[1]->cs_cid] = timestamp; cwndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt; @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } tcp:::send / cwndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= cwndsnxt[args[1]->cs_cid] / { @meantimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = avg(timestamp - cwndclosed[args[1]->cs_cid]); @stddevtimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = stddev(timestamp - cwndclosed[args[1]->cs_cid]); @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count(); cwndclosed[args[1]->cs_cid] = 0; cwndsnxt[args[1]->cs_cid] = 0; } tcp:::receive / args[4]->tcp_window == 0 && (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 / { swndclosed[args[1]->cs_cid] = timestamp; swndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt; @numclosed["swnd", args[2]->ip_saddr, args[4]->tcp_dport] = count(); } tcp:::send / swndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= swndsnxt[args[1]->cs_cid] / { @meantimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] = avg(timestamp - swndclosed[args[1]->cs_cid]); @stddevtimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] = stddev(timestamp - swndclosed[args[1]->cs_cid]); swndclosed[args[1]->cs_cid] = 0; swndsnxt[args[1]->cs_cid] = 0; } END { printf("%-6s %-20s %-8s %-25s %-8s %-8s\n", "Window", "Remote host", "Port", "TCP Avg WndClosed(ns)", "StdDev", "Num"); printa("%-6s %-20s %-8d %@-25d %@-8d %@-8d\n", @meantimeclosed, @stddevtimeclosed, @numclosed); } So this script will show us whether the peer's receive window size is preventing flow ("swnd" events) or whether congestion control is limiting flow ("cwnd" events). As an example I traced on a server with a large file transfer in progress via a webserver and with an active ssh connection running "find / -depth -print". Here is the output: ^C Window Remote host Port TCP Avg WndClosed(ns) StdDev Num cwnd 10.175.96.92 80 86064329 77311705 125 cwnd 10.175.96.92 22 122068522 151039669 81 So we see in this case, the congestion window closes 125 times for port 80 connections and 81 times for ssh. The average time the window is closed is 0.086sec for port 80 and 0.12sec for port 22.
    So if you wish to change the congestion control algorithm in Oracle Solaris 11, a useful first step may be to see whether congestion really is an issue on your network. Scripts like the one posted above can help assess this, but it's worth reiterating that if congestion control is occurring, that's not necessarily a problem that needs fixing. Recall that congestion control is about controlling flow to prevent large-scale drops, so looking at congestion events in isolation doesn't tell us the whole story. For example, are we seeing more congestion events with one control algorithm, but more drops/retransmissions with another? As always, it's best to start with measures of throughput and latency before arriving at a specific hypothesis such as "my congestion control algorithm is sub-optimal".

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let’s continue with TPL Dataflow: Asynchronous functions TPL Dataflow Task based asynchronous Pattern Part II: TPL Dataflow Definition (quoting the Async CTP doc): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.” This means: data manipulation processed asynchronously. “TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications”. Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? how do you avoid concurrency issues? how do you notify when data is available? how do you control how much data is waiting to be consumed? etc. Dataflow Blocks TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let’s review the available data processing blocks. Blocks are categorized into three groups: Buffering Blocks Executor Blocks Joining Blocks Think of them as electronic circuitry components :).. 1. BufferBlock<T>: it is a FIFO (First In First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers but only one will actually process it). 2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to N number of consumers). The developer can provide a function to make a copy of the data if necessary. 3. WriteOnceBlock<T>: it stores only one value and once it’s been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value. 4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each data item in the queue. You could also specify how many parallel executions to allow (degree of parallelism). 5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input, which is why it defines an output parameter. It ensures messages are processed and delivered in order. 6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input. 7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs). 8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.). 9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections.
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
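    As a preview of those examples, here is a minimal sketch of how a couple of these blocks compose into a pipeline. It assumes the System.Threading.Tasks.Dataflow library as shipped on NuGet (the exact CTP preview surface may differ slightly); the pipeline, values, and messages are invented purely for illustration:

    using System;
    using System.Threading.Tasks;
    using System.Threading.Tasks.Dataflow;

    class DataflowSketch
    {
        static void Main()
        {
            // TransformBlock (executor): parse each incoming string into an int,
            // processing up to 4 inputs in parallel (output order is still preserved).
            var parse = new TransformBlock<string, int>(
                s => int.Parse(s),
                new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

            // ActionBlock (executor): consume each parsed value.
            var print = new ActionBlock<int>(n => Console.WriteLine("Got " + n));

            // Wire the blocks together and propagate completion downstream.
            parse.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

            // Post data into the head of the pipeline.
            foreach (var s in new[] { "1", "2", "3" })
                parse.Post(s);

            parse.Complete();          // signal that no more input is coming
            print.Completion.Wait();   // block until everything has been processed
        }
    }

    The pattern is the same for the other blocks: Post pushes data in, Complete signals end of input, and waiting on (or awaiting) the final block's Completion task drains the pipeline. A BufferBlock<T> could be linked in at the head in the same way if you wanted to decouple producers from the parsing step.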

    Read the article

  • Collaborative Organizations build Organizational Culture

    “A Collaborative organization builds its culture based on the idea of the family or an athletic team.” (Hoefling, 2001) As I grew up, I participated in many different types of clubs, civic organizations, and sports teams. Now, looking back at the more successful undertakings, I can see three commonalities amongst them: they all shared a defined purpose or goal, defined functional roles, and a shared sense of responsibility to the group.
    Defined Purpose or Goal
    In order to unite people to work together, they must share a common goal or have a common purpose. An example of this would be the Lions Club International Foundation. Its purpose is to help everyone lead healthier and more productive lives; it nurtures the potential of youth, promotes health, serves the elderly, empowers the disabled, and helps victims of disasters. This organization holds localized meetings across the world, and its local clubs work with one another and with other organizations to promote common goals. If there are no common goals for the group, then there is nothing that binds people to the group, and nothing will be done.
    Defined Functional Roles
    In order for an organization to work and function as a team, it must have defined roles, and everyone must know how their roles are interdependent on each other. Let's shed light on this subject by looking at a football team's offense. Each player has an assigned role to play each time the ball is snapped: the offensive line blocks for the running back or quarterback, the quarterback passes the ball to the wide receiver or hands it off to the running back, and the running back and wide receivers run with the ball towards the goal line. Each member of this team shares the common goal of scoring a touchdown, but if each team member does not fulfill their assigned role, the offense will collapse and the team will lose yards. This is a setback to the team's goal of scoring a touchdown, because they are then potentially farther away from the goal line. In addition, if the players do not know their roles and how they are part of a larger team, even larger yardage losses can occur.
    Shared Sense of Personal Responsibility to the Group
    Shared responsibility comes with shared common goals. Each person in the organization must do their part to promote the common goal or purpose based on their abilities. A prime example of this is a wrestling team competing in a match. Points are awarded to the team based on how many wins the team achieves in the meet and, of those, how many were won by decision or by pin. If a wrestler pins his opponent, the team receives 2 points for the win, but if the wrestler wins by decision, the team only gets 1 point for the win. So it is the responsibility of each person on the team not to get pinned if they are unable to win the match; if a team member gets pinned, the other team receives an additional point for the win.
    References: Hoefling, T. (2001). Working Virtually: Managing People for Successful Virtual Teams and Organizations. Sterling, VA: Stylus Publishing, LLC.

    Read the article

  • Stack-based keyboard delay using Logitech MX3100 keyboard

    - by Mark S. Rasmussen
    I've been using a Logitech Cordless Desktop MX3100 keyboard for quite a while. I've never really had any problems, except for the occasional typo. I noticed, however, that I tended to make the typo "Laod" instead of "Load" quite a bit more often than any other typo. As it started to get on my nerves, I decided to do some testing. What I found out was that when I type lowercase "load", I never make the typo; with all uppercase, or just an uppercase L, I make the typo quite often. My actual (very scientific) testing is probably best described by showing the output:
    moatmoatmoat MoatMoatMoat
    loatloatloat LaotLaotLaot
    loafloafloaf LaofLaofLaof
    hoathoathoat HoatHoatHoat
    hoadhoadhoad HoadHoadHoad
    lortlortlort LrotLrotLrot
    What I found was that whenever Shift was depressed, typing an uppercase "L" would induce a significant lag if the next character was an "o", compared to the lag of any other key:
    High "o" lag: LoLoLoLoLoLo
    No "a" lag: LaLaLaLaLaLa
    No lag for either "o" or "a": lolololololo lalalalalala
    Realizing this, I regained a slight bit of sanity, as I knew I wasn't coming down with a case of Parkinson's. I was actually typing correctly; the lag just caused it to be interpreted wrongly. Now, what really bugs me is that I can't fathom how this is occurring. What I'm actually typing, in physical order, is this: L - o - a - d, and yet the "a" is output before the "o", even though "o" was pressed before "a". So while the keyboard is processing the "Lo" combo, the "a" gets prioritized and is inserted before the "o" is done processing, resulting in "Laod" instead of "Load". And this only happens when typing "Lo", not when typing lowercase "lo". This problem could stem from the keyboard hardware, the receiver hardware, or the keyboard software driver. Wherever the fault lies, however, I can't imagine how this could be implemented as anything but a FIFO queue. A general delay, sure, I could live with that, albeit I'd be irritated. But a lag affecting different keys differently, and even resulting in an unpredictable outcome - that just doesn't make any sense. I've solved the problem by just switching to a wired keyboard, but I can't shake it off: what kind of bug/error/scenario would result in a case like this?
    Edit: It's been suggested that I stop drinking Red Bull and stick to water instead. While that may actually help solve the issue, I'm really not looking for a solution as such. I'm more interested in an explanation of how this could happen, as I can't imagine any viable technical implementation that could result in this behavior.
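    One purely speculative way this could arise (nothing here reflects what the MX3100 hardware or Logitech's driver actually does): if some component in the chain handles each key event independently and commits the character when its own processing finishes, rather than in strict arrival order, you get exactly this symptom. A C# toy model with invented per-key timings, just to show the reordering is mechanically possible:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    class KeyReorderSketch
    {
        static void Main()
        {
            RunAsync().Wait();
        }

        static async Task RunAsync()
        {
            // Hypothetical per-key processing cost: the "o" following a shifted "L"
            // is slow, everything else is fast. Numbers are invented for illustration.
            Func<char, int> costOf = c => c == 'o' ? 220 : 50;

            var committed = new ConcurrentQueue<char>();
            var inFlight = new List<Task>();

            // Keys arrive in order, 100 ms apart, but each one is processed
            // independently and committed when its own processing finishes.
            foreach (var key in "Load")
            {
                var k = key;
                inFlight.Add(Task.Run(async () =>
                {
                    await Task.Delay(costOf(k));
                    committed.Enqueue(k);
                }));
                await Task.Delay(100);
            }

            await Task.WhenAll(inFlight);
            Console.WriteLine(string.Concat(committed)); // typically prints "Laod"
        }
    }

    With these invented costs the slow "o" finishes after the following "a", so the committed order is "Laod" even though the physical order was "Load" - i.e. any completion-order commit instead of a strict FIFO would reproduce the behavior described above.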

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27 28  | Next Page >