Search Results

Search found 748 results on 30 pages for 'periodically'.

Page 14/30 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • C# Threading Background Process - Programming - How to?

    - by Magic
    Hello... I have been given the horrible task of doing this: launch the website, take a screenshot, fill in the form details, click Next, take a screenshot, and so on. Rinse, repeat. With the various combinations this comes to about 300 screenshots, and I have to do it in 4 different browsers: Chrome, Firefox, IE 6 and IE 7. I cannot use tools that capture and store screenshots, such as SnagIt; I need to take a screenshot, paste it into a Word document, take the next screenshot, paste that into the Word document, and so on. So I thought I would write a tiny utility to help me do this. Here is the requirement spec I put up for it: an executable which, once launched, sits in the system tray; while it is active, every Print Scrn key press should write the captured contents to a Word document (either at a default path or a user-defined one); and the document should be saved periodically. Now, my question is: if I am going to develop this in C# (WinForms), how do I go about it? I can do a fair bit of C# programming and I am willing to learn, but I cannot find references on how to run a process in the background, and while it runs it has to capture the Print Scrn key. Can you folks point me to the right material where I can learn this? Theoretical references should suffice, but practical references would be even better. Thanks!

    Read the article

  • Is Moving Entity Framework objects over a webservice really the best way?

    - by aceinthehole
    I've inherited a .NET project that has close to 2 thousand clients out in the field that need to push data periodically up to a central repository. The clients wake up and attempt to push the data up via a series of WCF web services, passing each Entity Framework entity as a parameter. Once the service receives an object, it performs some business logic on the data and then sticks it in its own database, which mirrors the database on the client machines. The trick is that this data is being transmitted over a metered connection, which is very expensive, so optimizing the data is a serious priority. Now, we are using a custom encoder that compresses the data (and decompresses it on the other end) while it is being transmitted, and this reduces the data footprint. However, the amount of data the clients are using seems ridiculously large given the amount of information that is actually being transmitted. It seems to me that Entity Framework itself may be to blame: I suspect the objects are very large when serialized to be sent over the wire, with a lot of context information and who knows what else, when what we really need is just the 'new' inserts. Is using Entity Framework and WCF services as we have done so far the correct way, architecturally, of approaching this n-tiered, asynchronous, push-only problem? Or is there a different approach that could optimize the data use?

    Read the article

  • Is having a single `IndexWriter` instance in Lucene a good idea?

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:
    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)
    but:
    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed
    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?

    Read the article

  • Using socat to exec php cli

    - by RoyHB
    There are multiple client programs that periodically connect to a port on my server and send a single line of text. When a connection to the port is made, I need to start a PHP CLI script that processes the data. There may be many of the remote scripts running/connecting at more or less the same time, so I think it would be best if socat forked a process for each connection to run the script. I've gotten socat to do most of what I need, using the command

        socat tcp-l:myport,fork exec:mypath/socatTest.php

    I can read the input on php://stdin. All is good. The problem is that the process doesn't seem to fork, so if a second external program sends data while another is doing the same, it gets a connection refused error. Where have I gone wrong?
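
    For comparison only, here is a rough sketch of the same fork-per-connection pattern using Python's standard socketserver module; this is not a socat fix, and the port number and handler body are placeholders:

        # Each incoming connection is handled in its own forked child process,
        # mirroring the behaviour the socat ",fork" option is meant to provide.
        import socketserver

        class LineHandler(socketserver.StreamRequestHandler):
            def handle(self):
                # Read the single line of text the client sends.
                line = self.rfile.readline().decode("utf-8", errors="replace").strip()
                # Placeholder for the real work (e.g. handing the line to a PHP CLI script).
                print(f"received from {self.client_address[0]}: {line}")

        class ForkingServer(socketserver.ForkingTCPServer):
            allow_reuse_address = True  # avoid "address already in use" on quick restarts

        if __name__ == "__main__":
            with ForkingServer(("0.0.0.0", 9000), LineHandler) as server:
                server.serve_forever()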

    Read the article

  • Problems connecting to a linux file server from windows 7

    - by Rister
    I have an old Windows 2000 machine that I'm trying to replace because it is freezing periodically. It is used primarily for email, but it does need to be connected to the two Linux file servers ("dino1" and "dino2") in the office. When I try to get the new Windows 7 machine to connect, I can't find the user account that was being used (or I can't log on to the share). On the old machine the users all logged in as Administrator on the local machine and entered the password ("fuzzypickels") to log onto the share. To me, it seems like the username ought to be Administrator, but when I enter that with "fuzzypickels" it gives me an error that the username, the password, or both are incorrect. Is there something missing in my assumptions? Or is there something I can do to recover the username from the old machine?

    Read the article

  • Load on Ubuntu 8.04 LTS high

    - by Paddington
    My Ubuntu 8.04 LTS server periodically has a high load average spike (once every 2 days), resulting in Apache timing out; virtually everything, even SSH to the server, becomes impossible. When I am on the console and run top, I see the load average increase from less than 1 to above 60 in 15 minutes. How can I isolate the cause?

        top - 09:21:51 up 37 days, 20:18,  6 users,  load average: 5.41, 5.53, 5.36
        Tasks: 160 total,   2 running, 156 sleeping,   0 stopped,   2 zombie
        Cpu(s): 65.0%us,  8.8%sy,  0.0%ni,  1.0%id, 24.6%wa,  0.3%hi,  0.3%si,  0.0%st
        Mem:   3989468k total,  3444984k used,   544484k free,   360460k buffers
        Swap: 11687248k total,   178168k used, 11509080k free,   881772k cached

    Read the article

  • Is there a way to watch EyeTV in alarm-clock style "sleep" mode on your iMac?

    - by Mark S.
    My wife likes to watch TV to fall asleep; the only trouble is that the only TV in our house is the iMac with the EyeTV Hybrid. I'd like the TV to turn off after 1.5 hours of watching without changing the channel or volume, sort of like an alarm clock 'sleep' function. Do you know of a way to do this, either with an EyeTV plugin or an app that might be able to detect these conditions and shut down the display? Right now EyeTV overrides the screensaver. The power-saving functions don't really work because she doesn't start watching at the same time every night, and periodically she will want to record a 2 or 3 AM show. All I want to do is "close" (but not quit) EyeTV and shut off the display.

    Read the article

  • Gesture Based NetBeans Tip Infrastructure

    - by Geertjan
    All/most/many gestures you make in NetBeans IDE are recorded in an XML file in your user directory, "var/log/uigestures", which is what makes the Key Promoter I outlined yesterday possible. The idea behind it is to make analysis possible when you periodically pass the gestures data back to the NetBeans team. See http://statistics.netbeans.org for details. Since the gestures in the 'uigestures' file are identifiable by distinct loggers and other parameters, there's no end to the interesting things that one is able to do with it. While the NetBeans team can see which gestures are done most frequently, e.g., which kinds of projects are created most often, thus helping in prioritizing new features and bugs, etc., you as the user can, depending on who takes the initiative and how, directly benefit from your collected data, too. Tim Boudreau, in a recent article, mentioned the usefulness of hippie completion. So, imagine that whenever you use code completion, a tip were to appear reminding you about hippie completion. And then you'd be able to choose whether you'd like to see the tip again or not, etc., i.e., customize the frequency of tips and the types of tips you'd like to be shown. And then, it could be taken a step further. The tip plugin could be set up in such a way that anyone would be able to register new tips per gesture. For example, maybe you have something very interesting to share about code completion in NetBeans. So, you'd create your own plugin in which there'd be an HTML file containing the text you'd like to have displayed whenever you (or your team members, or your students, maybe?) use code completion. Then you'd register that HTML file in the plugin's layer file, in a subfolder dedicated to the specific gesture that you're interested in commenting on. The same is true, not just for NetBeans IDE, but for anyone creating their applications on top of the NetBeans Platform, of course.
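
    As a small illustration of the kind of analysis described above, the Python sketch below counts how often each gesture logger appears in a uigestures file. It assumes the file contains java.util.logging-style <record> entries with <logger> children and that it lives under the NetBeans user directory; both the format details and the path are assumptions to check against your own installation.

        # Count gesture frequencies in a NetBeans uigestures log (illustrative sketch).
        import re
        from collections import Counter
        from pathlib import Path

        # Hypothetical location; substitute your own NetBeans user directory.
        LOG = Path.home() / ".netbeans" / "7.0" / "var" / "log" / "uigestures"

        def count_gestures(path: Path) -> Counter:
            text = path.read_text(encoding="utf-8", errors="replace")
            # Each UI gesture record carries the name of the logger that emitted it.
            loggers = re.findall(r"<logger>(.*?)</logger>", text, flags=re.DOTALL)
            return Counter(name.strip() for name in loggers)

        if __name__ == "__main__":
            for logger, n in count_gestures(LOG).most_common(10):
                print(f"{n:6d}  {logger}")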

    Read the article

  • Time Synch Architecture in Windows Domain Environment

    - by Param
    I just read the following article: "In a domain, time synchronization takes place when Windows Time Service turns on during system startup and periodically while the system is running." (http://technet.microsoft.com/en-us/library/cc779145%28v=ws.10%29.aspx) From the above article I understand that the first sync takes place as soon as I start my system, but after that, at what periodic interval (how many minutes or seconds) does my Windows client (Windows XP, Windows 7 or Windows Server 2008 member) sync with my domain controller (the PDC emulator)? Do you have any idea, and how should I verify my sync interval? My domain controller is Windows Server 2008 R2 Standard.

    Read the article

  • Approach to Authenticate Clients to TCP Server

    - by dab
    I'm writing a server/client application where clients will connect to the server. What I want to do is make sure that the client connecting to the server is actually using my protocol, so I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. What I did in a previous version was take their IP address, the client version, and a few other attributes of the client and send it as a calculated hash to the server, which then took their IP and the version of the protocol the client claimed to be using and calculated that number to see if they matched. This works OK until you get clients that connect from within a router environment where their internal IP is different from their external IP. My fix for this was to pass the client's internal IP, used to calculate this hash, along with the authentication protocol. My fear is that this approach is not secure enough, since I'm passing the data used to create the "auth hash". Here's an example of what I'm talking about:

        Client IP: 192.168.1.10, Version: 2.4.5.2
        hash = 2*4*5*1 * (1+9+2) * (1+6+8) * (1) * (1+0)
        Client connects to server
        Client sends: auth hash, ip, version
        Server calculates that info, and accepts or denies the hash.

    Before I go and come up with another algorithm to prove that a client can provide data to a server (or use this existing algorithm), I was wondering if there are any existing, proven, and secure systems out there for generating a hash that both sides can generate with general knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will be contributing data to the server periodically. New clients will be added simply by connecting the client to the server and "registering" with the server. So a client connects to the server for the first time and registers their info (MAC address or some other kind of unique computer identifier); then, when they connect again, the server will recognize that client as a previous one and associate them with their data in the database.
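
    One widely used alternative to a home-grown formula is an HMAC over the client's claimed attributes with a per-client secret shared at registration time, so the token cannot be reproduced from data that travels in the clear. A minimal Python sketch (field names and key handling are illustrative only):

        import hashlib
        import hmac
        import os

        SHARED_KEY = b"replace-with-a-per-client-secret"  # distributed when the client registers

        def make_auth_token(client_id: str, version: str, nonce: bytes) -> str:
            # The token covers everything the client claims, plus a random nonce.
            msg = b"|".join([client_id.encode(), version.encode(), nonce])
            return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

        # Client side: a fresh nonce per connection prevents simple replay.
        nonce = os.urandom(16)
        token = make_auth_token("client-001", "2.4.5.2", nonce)

        # Server side: recompute from the same claimed values and compare in constant time.
        expected = make_auth_token("client-001", "2.4.5.2", nonce)
        print(hmac.compare_digest(token, expected))  # True only if the client knows the key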

    Read the article

  • Windows doesn't recognise my USB key anymore (it used to work)

    - by dominicbri7
    I use my friend's USB flash drive (Corsair Flash Voyager 16 GB) to transfer files from my laptop to my desktop computer. However, a couple of days ago my laptop stopped recognizing the USB key, while there are still no problems with any other computer. I use Windows 7 64-bit, if that helps. I tried uninstalling the driver, rebooting and all those kinds of tricks, but it won't work. When I connect it and open the "My Computer" window, I see "Removable Disk (G:)" for a moment, then it disappears... then it reappears again, and it keeps doing that periodically. I can't even right-click and hit "Properties" because it disappears. As I recall, it DOES work on every other computer. I think it has to do with the driver, but what can I do?

    Read the article

  • How do you override the warning "filename is not commonly downloaded" for a specific file?

    - by Oliver Salzburg
    There is a specific file on a customer's server which I require to connect to one of their services. The contents of the file are confidential and the file is not intended for the public. Thus, the file is not "commonly downloaded", and every time I need to download it, I get the "not commonly downloaded" warning. I have to download that file multiple times a day (the contents of the file change periodically) and, every time, I have to click through this little annoyance. The Phishing and malware detection page only explains how to disable the feature completely, which is not what I want at all. Can I disable this feature for a single given URL?

    Read the article

  • Strategy to store/average logs of pings

    - by José Tomás Tocino
    I'm developing a site to monitor web services. The most basic type of check is sending a ping and storing the response time in a CheckLog object. By default, PingCheck objects are triggered every minute, so in one hour you get 60 CheckLogs and in one day you get 1440 CheckLogs. That's a lot of them, and I don't need to store that level of detail, so I've set up a collapsing mechanism that periodically takes the uncollapsed CheckLogs older than 24h and collapses (averages) them in intervals of 30 minutes. So, if you have 360 CheckLogs that were saved from 0:00 to 6:00, after collapsing you retain just 12 of them. The problem is this: after averaging the response times, the graph changes drastically. What can I do to improve this? I guess one option could be narrowing the interval duration to 15 minutes. I've seen the graphs at the GitHub status page and they do not seem to suffer from this problem. I'd appreciate any kind of information you could give me about this area.
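
    A minimal sketch of the collapsing step described above, using plain (timestamp, response time) tuples rather than the site's CheckLog model, and the 30-minute window from the question:

        from collections import defaultdict
        from datetime import timedelta

        def collapse(logs, minutes=30):
            """logs: iterable of (timestamp: datetime, response_ms: float) pairs.
            Valid for window sizes that divide an hour evenly (e.g. 15 or 30)."""
            buckets = defaultdict(list)
            for ts, ms in logs:
                # Floor each timestamp to the start of its window.
                floored = ts - timedelta(minutes=ts.minute % minutes,
                                         seconds=ts.second,
                                         microseconds=ts.microsecond)
                buckets[floored].append(ms)
            # One averaged point per window, oldest first.
            return [(start, sum(v) / len(v)) for start, v in sorted(buckets.items())]

    Storing the minimum and maximum alongside the average would preserve the spikes that make the raw graph look so different from the collapsed one.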

    Read the article

  • backing up ntfs disk using rsync on ubuntu

    - by user70366
    For a long time I was using Windows. I have a separate drive I use to keep copies of my media files, photos etc. on, which I periodically back up to an external drive. In Windows I used SyncToy to do this. After my Windows stopped booting, I decided to switch to Linux (Ubuntu 10.10). That seems to be going fine, but now I want to back up my drive to the external drive like before. Mostly the two drives will already be the same, with maybe about 10 GB of extra files added. So I try to use rsync to synchronise the two drives like this:

        rsync --dry-run -rvlt --modify-window=1 /media/Antonio1TB/Backup /media/FREECOM\ HDD/Backup

    The problem is that the dry run indicates that every file on the drive will be copied, not just the files I have recently added. What is the correct command to sync two NTFS drives under Ubuntu so that files that already exist don't get copied again? Thanks.

    Read the article

  • How to avoid Remove-Item PowerShell errors "process cannot access the file"?

    - by Michael Freidgeim
    We are using TfsDeployer and a PowerShell script to remove folders using Remove-Item before deployment of a new version. Sometimes the PS script fails with an error like:

        Remove-Item : Cannot remove item Services\bin: The process cannot access the file
        'Services\bin' because it is being used by another process.
        Get-ChildItem -Path $Destination -Recurse | Remove-Item <<<< -force -recurse
            + CategoryInfo : WriteError: (C:\Program File..\Services\bin:DirectoryInfo) [Remove-Item], IOException
            FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand

    I've tried to follow the answer from "PowerShell remove force" and pipe Get-ChildItem -Recurse into Remove-Item:

        get-childitem * -include *.csv -recurse | remove-item

    but the error still happens periodically. We are using Unlocker to manually kill the locking application (it's usually w3wp), but I would prefer to find an automated solution. Another (not ideal) option is to suppress PowerShell errors:

        get-childitem -recurse -force -erroraction silentlycontinue

    Any suggestions are welcome.

    Read the article

  • Any way to overwrite (not merge) Outlook contacts when importing from a file?

    - by Dan
    I'm trying to create a contact list for Outlook 2010 that will contain contact information for every person in my company. I intend on keeping the list current, which means I will be manually adding new employees to the contact list, and removing contacts who no longer work here. The contact list will reside in its own subfolder within the Outlook Contacts folder. I want to periodically export this contact list as a .csv file, and allow the other employees in the company to import it into Outlook on their own computer, thus providing them with a comprehensive and up-to-date company contact list. The problem is, Outlook 2010 only wants to merge contact lists, not overwrite them. This means that any contacts who are no longer with the company will not be removed from the contact lists on employee stations. Is there any way to force Outlook 2010 to overwrite the contact list? Oh how I long for the days of Outlook 2003 and its tidy .pab files.

    Read the article

  • Apache with mod_perl eating memory when idle

    - by syneticon-dj
    An Apache web server running a mod_perl application is exhibiting abnormal memory usage: after the "day load" ceases, the system's memory is exhausted by the Apache processes and oom_killer is invoked. As the load returns the following morning, the memory usage normalizes, probably because Apache workers get recycled periodically if a sufficient number of hits is generated (memory graph and the correlating graph of Apache hits per second not reproduced here). The remaining 2 hits per second throughout the night are induced by HAProxy checks: it runs HEAD http://mydomain.example.com/running HTTP/1.0 requests against the server every half a second, with "running" being a static file (i.e. not invoking any Perl code). It also seems that disabling these checks remedies the memory usage problem, but that obviously cannot be a solution. All three similarly configured servers (behind HAProxy) exhibit this behavior. The OS is Ubuntu 10.10, Apache version 2.2.16. This seems to be a memory leak, but I have no idea how to start debugging it. Any hints?
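
    One way to start narrowing this down is to sample the resident memory of each Apache worker over time, so that growth can be lined up against the traffic graphs. A sketch using the third-party psutil package; the process name "apache2" is Ubuntu's default and the sampling interval is arbitrary:

        import time
        import psutil

        def sample_apache_rss():
            # Log the resident set size of every Apache worker once per call.
            for proc in psutil.process_iter(["pid", "name", "memory_info"]):
                if proc.info["name"] == "apache2":
                    rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
                    print(f"{time.strftime('%H:%M:%S')} pid={proc.info['pid']} rss={rss_mb:.1f} MiB")

        if __name__ == "__main__":
            while True:
                sample_apache_rss()
                time.sleep(60)  # one sample per minute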

    Read the article

  • Relating ping to perceived browser GUI response

    - by cvsdave
    We periodically get complaints of poor GUI (browser page) response that we need to explore. I am looking for a quick and cheap first check to see if the issue is network latency, or server performance. Has anyone encountered any discussion of ping time and perceived GUI response? I understand that GUI response is complicated, but it would be nice if we could find or develop a rule of thumb along the lines of "Hmmmm, ping is over 200, it might be network problems". Ideally, this lives in a script on the user's machine so that we can see the latency that they are seeing... (BASH, Linux). A reference to a good discussion page would be a fine answer, as would any recommendation of other source material.
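
    As a rough first pass at the kind of script described, the Python sketch below times an HTTP request from the user's machine and applies a placeholder threshold; the URL and the 200 ms cut-off are stand-ins for whatever rule of thumb you settle on, and since the question mentions BASH, treat this as an illustration of the idea rather than the delivery mechanism:

        import time
        import urllib.request

        URL = "https://example.com/"   # placeholder: the page users are complaining about
        THRESHOLD_MS = 200             # placeholder "might be the network" cut-off

        def probe(url: str) -> float:
            # Time from request start to the first byte of the response.
            start = time.monotonic()
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read(1)
            return (time.monotonic() - start) * 1000

        if __name__ == "__main__":
            ms = probe(URL)
            verdict = "possible network problem" if ms > THRESHOLD_MS else "latency looks OK"
            print(f"{ms:.0f} ms - {verdict}")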

    Read the article

  • Appcache and jquery mobile on a CMS powered site?

    - by user793011
    Has anyone used the cache manifest to make a CMS site work offline? I've made a demo with static HTML files which seems to work fine, so I'm assuming it wouldn't be too hard to achieve the same thing with a CMS. The way you tell browsers that files have changed (and so need to be downloaded again) is by adding a comment to the cache manifest file so its byte size changes. I'm not quite sure how to do this with a CMS, but maybe some sort of server cron could run periodically? Personally I'm more interested in having a site that works offline than in achieving ideal performance, so if the file was modified every hour rather than when content actually changed, that would be fine for me. For anyone who has used appcache with a CMS: have you done so with jQuery Mobile at the same time? What I'm after is a fully native feel to a site that's accessible offline; in other words, I want to mimic a native app. My static demo does this perfectly with jQuery Mobile, so again I would have thought this would be achievable in a CMS.
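
    A sketch of the "server cron" idea mentioned above: a small Python job that rewrites a version comment in the manifest so its bytes change and browsers re-fetch the cached files. The manifest path and the comment marker are invented for the example:

        import datetime
        import re
        from pathlib import Path

        MANIFEST = Path("/var/www/site/offline.appcache")  # hypothetical path

        def bump_manifest(path: Path) -> None:
            text = path.read_text(encoding="utf-8")
            stamp = "# version " + datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
            if re.search(r"^# version .*$", text, flags=re.MULTILINE):
                # Replace the existing version comment in place.
                text = re.sub(r"^# version .*$", stamp, text, count=1, flags=re.MULTILINE)
            else:
                # Or append one if the manifest has never been stamped.
                text = text.rstrip("\n") + "\n" + stamp + "\n"
            path.write_text(text, encoding="utf-8")

        if __name__ == "__main__":
            bump_manifest(MANIFEST)  # run hourly from cron, as the question suggests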

    Read the article

  • Diagnosing another Windows 7 Lockup

    - by MSEoris
    I'm running Windows 7 on a fairly modern machine (8 GB RAM, AMD FX-6100, GTX 560 Ti) and I notice that periodically Windows seems to just hang for a little while. Frequently this occurs after a cold boot when I start up five or six small to medium-sized programs, but it also occasionally occurs during normal usage. Basically, the screen locks up and there is no keyboard responsiveness for 30 seconds to a full minute; after a bit of patience, control is returned, but I'm interested in figuring out what is causing such lockups. I checked the event log and don't see any issues, and all I can see in Task Manager is a spike in CPU and memory usage right after this occurs. Any tips on how to even begin to diagnose this? Thanks.

    Read the article

  • OpenVPN: log connecting client IPs

    - by TossUser
    I'm looking for the best way to log the IP of every client that connects to my VPN server, to either a text file or a database. By IP I mean the public WAN IP they are connecting from. A hack could definitely be to make the OpenVPN server log to a separate logfile and run logtail periodically to extract the necessary information. The database I want to build would look like:

        Client_Name | Client_IP   | Connection_date
        roadwarr1   | 72.84.99.11 | 03/04/14 - 22:44:00 Sat

    Please don't recommend the commercial OpenVPN Access Server; that's not a real solution here. If the disconnection date could also be determined, that would be even better, so I could see how long a client was connected and from where. Thank you.
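
    A sketch of one way to populate that table: parse the client list out of OpenVPN's status file (assuming the default comma-separated "version 1" status format) and append rows to SQLite from cron. The file paths are examples only:

        import sqlite3
        from pathlib import Path

        STATUS_FILE = Path("/etc/openvpn/openvpn-status.log")   # example path
        DB_FILE = Path("/var/lib/openvpn/connections.sqlite")   # example path

        def parse_clients(text: str):
            rows, in_clients = [], False
            for line in text.splitlines():
                if line.startswith("Common Name,Real Address"):
                    in_clients = True          # header of the client list section
                    continue
                if line.startswith("ROUTING TABLE"):
                    break                      # end of the client list section
                if in_clients and "," in line:
                    name, real_address, _rx, _tx, connected_since = line.split(",", 4)
                    rows.append((name, real_address.split(":")[0], connected_since))
            return rows

        def record(rows):
            con = sqlite3.connect(DB_FILE)
            con.execute("CREATE TABLE IF NOT EXISTS connections "
                        "(client_name TEXT, client_ip TEXT, connected_since TEXT)")
            con.executemany("INSERT INTO connections VALUES (?, ?, ?)", rows)
            con.commit()
            con.close()

        if __name__ == "__main__":
            record(parse_clients(STATUS_FILE.read_text()))

    OpenVPN's client-connect script hook is another route to the same data, since it exposes the common name and the connecting address to the script's environment.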

    Read the article

  • Sync local files to S3 similar to Robocopy

    - by Yuck
    I am looking for a way to synchronize an entire local folder structure to Amazon S3, similar to how one might synchronize two folders using Robocopy. Whatever solution I come up with needs to be scheduled to run periodically from the Windows Task Scheduler. So anything that requires a GUI to perform the synchronization is not a viable solution. Standalone Windows .EXE command line utility for Amazon S3 & EC2 looked promising, but seems to have been abandoned and would not work when I tried to use it. Possibly a difference in the way that security is handled now compared to that software's most recent release.
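
    Not the abandoned utility mentioned above, but as an illustration of how the same job can run headless from Task Scheduler, here is a boto3 sketch that walks a local folder and uploads anything missing or differing in size; the bucket name and local path are placeholders:

        import os
        import boto3

        BUCKET = "my-backup-bucket"        # placeholder
        LOCAL_ROOT = r"C:\data\to-sync"    # placeholder

        def sync(bucket: str, root: str) -> None:
            s3 = boto3.client("s3")
            # Map existing keys to sizes so unchanged files can be skipped.
            remote = {}
            paginator = s3.get_paginator("list_objects_v2")
            for page in paginator.paginate(Bucket=bucket):
                for obj in page.get("Contents", []):
                    remote[obj["Key"]] = obj["Size"]
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    key = os.path.relpath(path, root).replace(os.sep, "/")
                    if remote.get(key) != os.path.getsize(path):
                        s3.upload_file(path, bucket, key)
                        print("uploaded", key)

        if __name__ == "__main__":
            sync(BUCKET, LOCAL_ROOT)

    The AWS command line's aws s3 sync performs the same one-way comparison and also runs fine from a scheduled task.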

    Read the article

  • How can I rename files and subdirectories in a copied directory based on changes in the original?

    - by GaryF
    I have a directory structure with many hundreds of files and folders underneath it for organising files (in this case photos). I create backups of that directory structure by periodically rsyncing it to identical copies on external drives. These drives may be offsite some of the time. I want to restructure and rename the files and directories in the original and then, later, when I have an external drive onsite, be able to run some tool that will apply these structural and naming changes to the backup. If I just use rsync, it will have to recopy much of the data to the backup drive, which I'd rather avoid given the sizes involved. How can I get the changes I make to the original directory into the backups, as they become available, without having to recopy/rsync the data?
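
    Purely as a sketch of one possible approach (not an established tool): record each rename or move made in the original tree to a small manifest, then replay that manifest on the backup when the drive is next attached, so a follow-up rsync only has to handle genuinely new data. The paths and the manifest format are invented for the example:

        import csv
        import os
        from pathlib import Path

        MANIFEST = Path("renames.csv")  # rows of: old_relative_path,new_relative_path

        def log_rename(old_rel: str, new_rel: str) -> None:
            # Call this (or script it) whenever something is moved in the original tree.
            with MANIFEST.open("a", newline="") as f:
                csv.writer(f).writerow([old_rel, new_rel])

        def replay(backup_root: Path) -> None:
            # Apply the recorded moves to the backup copy.
            with MANIFEST.open(newline="") as f:
                for old_rel, new_rel in csv.reader(f):
                    src, dst = backup_root / old_rel, backup_root / new_rel
                    if src.exists():
                        dst.parent.mkdir(parents=True, exist_ok=True)
                        os.rename(src, dst)

        if __name__ == "__main__":
            replay(Path("/media/backup/photos"))  # example mount point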

    Read the article

  • dynamically drawing polylines on googlemaps using php/mysql

    - by arc
    Hi. I am new to the Google Maps API. I have written a small app for my mobile phone that periodically updates its location to an SQL database. I would like to display this information on a Google Map in my browser. Ideally I'd like to then poll the database periodically and, if any new co-ords have arrived, add them to the line. The best way of describing it is this: http://tiny.cc/HEIa0 In a quest to get there, I've started with the documents on Google and been modifying them to try and achieve what I want. It doesn't work, and I don't know enough to know why. I would love some advice as to why, and any pointers towards my ultimate goal would be very much welcomed. My page is based on the "Google Maps AJAX + MySQL/PHP Example":

        <script type="text/javascript">
        //<![CDATA[
        function load() {
          if (GBrowserIsCompatible()) {
            var map = new GMap2(document.getElementById("map"));
            map.addControl(new GSmallMapControl());
            map.addControl(new GMapTypeControl());
            map.setCenter(new GLatLng(47.614495, -122.341861), 13);
            GDownloadUrl("phpsqlajax_genxml.php", function(data) {
              var xml = GXml.parse(data);
              var line = [];
              var markers = xml.documentElement.getElementsByTagName("points");
              for (var i = 0; i < points.length; i++) {
                var point = points.item(i);
                var lat = point.getAttribute("lat");
                var lng = point.getAttribute("lng");
                var latlng = new GLatLng(lat, lng);
                line.push(latlng);
                if (point.firstChild) {
                  var station = point.firstChild.nodeValue;
                  var marker = createMarker(latlng, station);
                  map.addOverlay(marker);
                }
              }
              var polyline = new GPolyline(line, "#ff0000", 3, 1);
              map.addOverlay(polyline);
            });
          }
        //]]>

    My PHP file is generating the following XML:

        <?xml version="1.0" encoding="UTF-8" ?>
        <points>
          <point lng="-122.340141" lat="47.608940"/>
          <point lng="-122.344391" lat="47.613590"/>
          <point lng="-122.356445" lat="47.624561"/>
          <point lng="-122.337654" lat="47.606365"/>
          <point lng="-122.345673" lat="47.612823"/>
          <point lng="-122.340363" lat="47.605961"/>
          <point lng="-122.345467" lat="47.613976"/>
          <point lng="-122.326584" lat="47.617214"/>
          <point lng="-122.342834" lat="47.610126"/>
        </points>

    I have successfully worked through http://code.google.com/apis/maps/articles/phpsqlajax.html before attempting to customise the code. Any pointers? Where am I going wrong?

    Read the article

  • ANDROID: inside Service class, executing a method for Toast (or Status Bar notification) from scheduled TimerTask

    - by Peter SHINe
    I am trying to execute a {public void} method in a Service, from a scheduled TimerTask which executes periodically. This TimerTask periodically checks a condition; if it's true, it calls the method via {className}.{methodName}. However, as Java requires, the method then needs to be a {public static} method if I want to call it through {className} with the {.dot}. The problem is that this method shows a notification using Toast (Android pop-up notification) and the Status Bar. To use these notifications, one must use Context context = getApplicationContext(); but for that to work, the method must not have the {static} modifier and must reside in the Service class. So, basically, I want the background Service to evaluate a condition from a scheduled TimerTask and execute a method in the Service class. Can anyone tell me the right way to use a Service, invoking a method when a certain condition is satisfied while looping the evaluation? Here are the actual lines of code. The TimerTask class (WatchClipboard.java):

        public class WatchClipboard extends TimerTask {
            //DECLARATION
            private static GetDefinition getDefinition = new GetDefinition();

            @Override
            public void run() {
                if (WordUp.clipboard.hasText()) {
                    WordUp.newCopied = WordUp.clipboard.getText().toString().trim().toLowerCase();
                    if (!(WordUp.currentCopied.equals(WordUp.newCopied))) {
                        WordUp.currentCopied = WordUp.newCopied;
                        Log.v(WordUp.TAG, WordUp.currentCopied);
                        getDefinition.apiCall_Wordnik();
                        FetchService.instantNotification(); // it requires this method to have the {static} modifier, if I want to call it in this way
                    }
                }
            }
        }

    And the Service class (FetchService.java). If I change the modifier to static, {Context}-related problems occur:

        public class FetchService extends Service {
            public static final String TAG = "WordUp"; // for Logcat filtering

            //DECLARATION
            private static Timer runningTimer;
            private static final boolean THIS_IS_DAEMON = true;
            private static WatchClipboard watchClipboard;
            private static final long DELAY = 0;
            private static final long PERIOD = 100;

            @Override
            public IBinder onBind(Intent arg0) {
                // TODO Auto-generated method stub
                return null;
            }

            @Override
            public void onCreate() {
                Log.v(WordUp.TAG, "FetchService.onCreate()");
                super.onCreate();
                //TESTING SERVICE RUNNING
                watchClipboard = new WatchClipboard();
                runningTimer = new Timer("runningTimer", THIS_IS_DAEMON);
                runningTimer.schedule(watchClipboard, DELAY, PERIOD);
            }

            @Override
            public void onDestroy() {
                super.onDestroy();
                runningTimer.cancel();
                stopSelf();
                Log.v(WordUp.TAG, "FetchService.onCreate().stopSelf()");
            }

            public void instantNotification() { // if I change the modifier to static, {Context}-related problems occur
                Context context = getApplicationContext(); // application Context

                // use Toast notification: need to accept user interaction, and change the duration of show
                Toast toast = Toast.makeText(context, WordUp.newCopied + ": " + WordUp.newDefinition, Toast.LENGTH_LONG);
                toast.show();

                // use Status notification: need to automatically expand to show lines of definitions
                NotificationManager mNotificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
                int icon = R.drawable.icon;                      // icon from resources
                CharSequence tickerText = WordUp.newCopied;      // ticker-text
                long when = System.currentTimeMillis();          // notification time
                CharSequence contentTitle = WordUp.newCopied;    // expanded message title
                CharSequence contentText = WordUp.newDefinition; // expanded message text
                Intent notificationIntent = new Intent(this, WordUp.class);
                PendingIntent contentIntent = PendingIntent.getActivity(this, 0, notificationIntent, 0);
                // the next two lines initialize the Notification, using the configurations above
                Notification notification = new Notification(icon, tickerText, when);
                notification.setLatestEventInfo(context, contentTitle, contentText, contentIntent);
                mNotificationManager.notify(WordUp.WORDUP_STATUS, notification);
            }
        }

    Read the article
