Search Results

Search found 19793 results on 792 pages for 'media library'.

Page 291/792 | < Previous Page | 287 288 289 290 291 292 293 294 295 296 297 298  | Next Page >

  • The idea of functionN in Scala / Functionaljava

    - by Luke Murphy
    From brain driven development: It turns out that every function you'll ever define in Scala will become an instance of an implementation of a certain Function trait. There is a whole bunch of those Function traits, ranging from Function1 up to Function22. Since functions are objects in Scala and Scala is a statically typed language, it has to provide an appropriate type for every function that takes a different number of arguments. If you define a function with two arguments, the compiler picks Function2 as the underlying type. Also, from Michael Froh's blog: You need to make FunctionN classes for each number of parameters that you want? Yes, but you define the classes once and then you use them forever, or ideally they're already defined in a library (e.g. Functional Java defines classes F, F2, ..., F8, and the Scala standard library defines classes Function1, ..., Function22). So we have a list of function traits (Scala) and a list of interfaces (Functional Java) to enable us to have first-class functions. I am trying to understand exactly why this is the case. I know that in Java, for example, when I write a method such as public int add(int a, int b){ return a + b; } I cannot then go ahead and write add(3,4,5); (the error would be something like: method add cannot be applied to given types). Do we simply have to define an interface/trait for functions with each different number of parameters because of static typing?
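    To make the "why" concrete, here is a minimal sketch in Java (the interface names F1 and F2 below are hypothetical stand-ins, not the actual Functional Java or Scala library definitions): because a statically typed language cannot abstract over the number of type parameters, every arity needs its own interface, which is exactly why these libraries ship F, F2, ..., F8 or Function1, ..., Function22.

        // Hypothetical per-arity function interfaces, sketching why Function1..Function22 exist.
        interface F1<A, R> {          // one argument, like Scala's Function1
            R apply(A a);
        }

        interface F2<A, B, R> {       // two arguments, like Scala's Function2
            R apply(A a, B b);
        }

        public class FirstClassFunctions {
            public static void main(String[] args) {
                // "add" becomes an object of type F2<Integer, Integer, Integer>; no single
                // static type could also accept a three-argument call like add(3, 4, 5).
                F2<Integer, Integer, Integer> add = new F2<Integer, Integer, Integer>() {
                    public Integer apply(Integer a, Integer b) { return a + b; }
                };
                System.out.println(add.apply(3, 4)); // prints 7
            }
        }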

    Read the article

  • Series On Embedded Development (Part 3) - Runtime Optionality

    - by Darryl Mocek
    What is runtime optionality? Runtime optionality means writing and packaging your code in such a way that all of the features are available at runtime, but aren't loaded and used unless the feature is actually used. The code is separate, and you can even remove the code to save persistent storage if you know the feature will not be used. In native programming terms, it's splitting your application into separate shared libraries so you only have to load what you're using, which means it only impacts volatile memory when enabled at runtime. All the functionality is there, but if it's not used at runtime, it's not loaded. A good example of this in Java is JVMTI, the Java Virtual Machine Tool Interface. On smaller, embedded platforms, these libraries may not be there. If the libraries are not there, there's no effect on the runtime as long as you don't try to use the JVMTI features. There is a trade-off between size/performance and flexibility here. Putting code in separate libraries means loading that code will take longer and it will typically take up more persistent space. However, if the code is rarely used, you can save volatile memory by including it in a separate library. You can also use this method in Java by putting rarely-used code into one or more separate JARs. Loading a JAR and parsing it takes CPU cycles and volatile memory. Putting all of your application's code into a single JAR means more processing for that JAR. Consider putting rarely-used code in a separate library/JAR.
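    For the Java case described above, here is a minimal, hedged sketch of the idea (the class name com.example.ReportFeature and the --reports flag are made up for illustration): the rarely-used feature lives in its own optional JAR and is only loaded reflectively when it is actually requested, so removing that JAR to save persistent storage simply makes the feature unavailable instead of breaking the application.

        // Sketch: load an optional feature only if it is requested at runtime.
        public class OptionalFeatureLoader {
            public static void main(String[] args) {
                boolean featureRequested = args.length > 0 && args[0].equals("--reports");
                if (!featureRequested) {
                    // The optional classes (and their JAR) are never touched, so they
                    // cost no volatile memory on this run.
                    System.out.println("Feature not requested; optional code never loaded.");
                    return;
                }
                try {
                    // Loaded lazily from a separate JAR on the classpath.
                    Class<?> featureClass = Class.forName("com.example.ReportFeature");
                    Runnable feature = (Runnable) featureClass.newInstance();
                    feature.run();
                } catch (ClassNotFoundException e) {
                    // The optional JAR was removed to save persistent storage; degrade gracefully.
                    System.out.println("Reporting feature is not installed on this device.");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }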

    Read the article

  • iTunes memory usage

    - by Jordan S. Jones
    Why does iTunes use upwards of 70 MB of RAM when it is minimized to my system tray playing music? -- Update -- I understand that iTunes is a resource hog :) What I'm trying to find out is what part of iTunes is using all that RAM. Is it the music library? If I have a smaller music library, will it use less RAM? Is it loading all the album artwork into RAM for some dumb reason? Additionally, are there any recommendations on what someone could do to reduce the amount of RAM it is using?

    Read the article

  • When can I publish a software tool written at work?

    - by AlexMA
    I'm working on a software problem at work that is fairly generic, but I can't find a library I like to solve it, so I'm considering writing one myself (at least a bare-bones version). I'll be writing some if not all of the 1.0 version at work, since I need it for the project. If it turns out well I might want to bring the work home and polish it up just for fun, and maybe release it as an open-source project. However, I'm concerned that if I wrote the 1.0 version at work I may not be allowed to do this in a legal sense. Obviously I could ask my boss (who probably won't care), but I'm curious how other programmers have dealt with this issue and where the law stands here. My one-sentence question is: when is it okay (legally/ethically) to open-source a software tool originally written by you for work, at work? What if you have expanded the original source significantly during off-hours? Follow-up: Suppose I write the whole thing at home on my own time and then simply use it at work; does that change things drastically? Follow-up 2: Note that I'm not trying to rip off my employer (I understand that they're paying me to build products that they own)--I'm just wondering if there's a fair way of doing this for all involved... It would be nice if some nonprofit down the road could use my code and save some time. Also, there's another issue at stake. If I write the library for a very simple, generic thing (like HTML tables in JavaScript), does that mean I can never again do so on my own time without putting myself at legal risk (even if it was a whole new, fresh rewrite or a segment of a larger project)? Am I surrendering my right to write code for this sort of project for the rest of my life (without this company's permission), since the code at work might still be somewhere in my brain influencing me? This seems related to software patents, as a side note.

    Read the article

  • Is Your Company Social on the Inside?

    - by Mike Stiles
    As we talk about the extension of social from an outbound-facing marketing tool to a platform that will reach across the entire enterprise, servicing multiple functions of that enterprise, it might be time to take a look at how social can be effectively employed for internal communications. Remember the printed company newsletter? Yeah, nobody reads it. Remember the emailed company newsletter? Yeah, nobody reads it. Why not? Shouldn't your employees care about the company more than anything else in life and be voraciously hungry for any information related to it? The more realistic prospect is that a company's employees don't behave much differently at work where information is concerned than they do in their personal lives. They "tune in" to information that's immediately relevant to them, that piques their interest, and/or that's presented in a visually engaging way. That currently makes an internal social platform the ideal way to communicate within the organization. It not only facilitates more immediate, more targeted (and thus more relevant) messaging from the company out to employees, it sets a stage for employees to communicate with each other and efficiently get answers to questions from peers. It's a collaboration tool on steroids. If you build such an internal social portal and you do it right, will employees use it? Considering social media has officially been declared more addictive than cigarettes, booze and sex... probably. But what does it mean to do an internal social platform "right"? The bar has been set pretty high. Your employees are used to Twitter and Facebook, and would roll their eyes at anything less simple or harder to navigate than those. All the Facebook best practices would apply to your internal social platform as well, including the importance of managing posting frequency, using photos and video, moderation & response, etc. And don't worry, you won't be the first to jump in. WPP's global digital agency Possible has its own social network called Colab. Nestle has "The Nest." Red Robin's got one. I myself got an in-depth look at McGraw-Hill's internal social platform at Blogwell NYC. Some of these companies are building their own platforms, others are buying them off the shelf or customizing readymade solutions. But you won't be the last either. Prescient Digital Media and the IABC learned that 39% of companies don't offer employees any social tools. Not a social network, not discussion forums, not even IM. And a great many continue to ban the use of Facebook and Twitter on the premises. That's pretty astonishing since social has become as essential a modern-day communications tool as the telephone. But such holdouts will pay a big price for being mired in fear while competitors exploit social connections unchallenged. Fish where the fish are. If social has become the way people communicate and take in information, let that be the way communication is trafficked in the organization.

    Read the article

  • Songbird too damn slow, at least on my mac

    - by Cawas
    I've read in a few places that Songbird is no good with more than a few thousand library items because it starts getting quite slow. Well, in my case (which is a clean install) I've imported 17k items (which I know is not that much), and instead of just becoming slow it frequently goes to "not responding" for several minutes until it comes back to its senses again. That happens on any random operation, such as deleting one item from the library. I've also read, in a few more places, things that give very little hope of fixing this issue, but I wonder... Is there any way to tweak it and make it work as fast as expected? Am I missing something, or is this just a completely and utterly useless piece of software for libraries with more than 10 thousand thingies?

    Read the article

  • Setting WMI permissions remotely on Windows Server 2003

    - by user41507
    Hello. I am a programmer; I don't know servers very well. I made a simple program that checks whether a service on a remote server is started or not, using this: http://msdn.microsoft.com/en-us/library/dwd0y33x(v=VS.90).aspx. However, the right permissions have to be set, and I can't find any documentation on the internet except for this one: http://msdn.microsoft.com/en-us/library/aa393266(VS.85).aspx. The engineer says "tell me exactly what I should do", and there are many DCOM settings. Is there any nice document I can show him? Thanks in advance.

    Read the article

  • Is wisdom of what happens 'behind the scenes' (in the compiler, external DLLs, etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that. That seems to be my problem though... as I try to write code (I'm pretty fluent at C++) I always sit there for enormous amounts of time in front of a text editor wondering how every line, statement, datum, function, etc. will correspond to every assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, and all of the other hardware being used as well. For example... I would write cout << "Before memory changed" << endl; and run the debugger to get the assembly for this, and then try to disassemble the assembly down to machine code based on my ISA, and then research every .dll, library file, linked library, linking process, the linker source code of the program, the makefile, the steps the kernel I'm using takes to process this compilation, the hardware's part aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling conventions, DDR3 RAM and disk drive, filesystem functioning and so many other things). Am I going about programming the wrong way? I mean, I feel I should know everything that goes on underneath the English syntax of a computer program. But the problem is that the more I research every little thing, the less I actually accomplish. I can never finish anything because of this mentality, yet I feel compelled to know everything... what should I do?

    Read the article

  • Making document storage in Sharepoint a breeze (leave the Web UI behind)

    - by deadlydog
    Hey everyone, I know many of us regularly use Sharepoint for document storage in order to make documents available to several people, have them version controlled, etc. Doing this through the Web UI can be a real headache, especially when you have multiple documents you want to modify or upload, or when IE isn't your default browser. Luckily we can access the Sharepoint library like a regular network drive if we like. Open Sharepoint in Internet Explorer (other browsers don't support the Open with Explorer functionality), navigate to wherever your documents are stored, choose the Library tab, and then click Open with Explorer. This will open the document storage in Explorer and you can interact with the documents just as if they were on any other network drive. This makes uploading large numbers of documents or directory structures super easy (a simple copy-paste), and modifying your files nice and easy. As an added bonus, you can drag and drop that location from the address bar in Explorer to the Favorites menu so that it's always easily accessible, and you can leave the Sharepoint Web UI behind completely for modifying your documents. Just click on the new favorite to go straight to your documents. You can even map this folder location as a network drive if you want to have it show up as another drive (e.g. an N: drive). I hope you found this as useful as I did!

    Read the article

  • connection to apache server switches sockets connection

    - by Newben
    I have just posted a question, but I am posting another one because the problem is not the one I had in mind when asking it. So, I am running a Rails app on OS X; when I run rails s, everything works fine. If I shut down the Apache server (MAMP) and run rails s again, I get this message: Can't connect to local MySQL server through socket '/Applications/MAMP/tmp/mysql/mysql.sock', which for sure is normal. For info, my MAMP server is running, and the connection must pass through /Applications/MAMP/Library/bin/mysql, so I aliased it by setting this in my bash profile: alias mysql="/Applications/MAMP/Library/bin/mysql" Now, when I launch a rails generate command, I get this message: /$root/vendor/bundle/ruby/2.0.0/gems/mysql2-0.3.11/lib/mysql2/client.rb:44:in `connect': Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) (Mysql2::Error) So how can this be?
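    A hedged guess at the cause, since the app's config isn't shown: the bash alias only changes which mysql command-line client the shell runs; the mysql2 gem never sees it and falls back to the default /tmp/mysql.sock unless config/database.yml points it at MAMP's socket. A sketch of the usual fix (the database name and credentials below are placeholders):

        # config/database.yml -- development section (sketch; adjust names and passwords to your setup)
        development:
          adapter: mysql2
          database: myapp_development
          username: root
          password: root
          socket: /Applications/MAMP/tmp/mysql/mysql.sock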

    Read the article

  • iTunes Home Sharing with VPN

    - by Philip Crumpton
    I'm trying to set up a VPN on a Windows machine that contains my iTunes library, then connect my iPhone to it wirelessly, using Home Sharing (remotely). I have read that this can easily be set up if the iTunes library is on a Mac (Network Beacon and YazSoft ShareTool are two products I found). I can't find anyone who has had success on a Windows machine, though. In my thinking, there are a few options for getting this done. Find a utility that takes care of this for me (like the Mac-only options listed above) and is compatible with the iPhone (Hamachi is NOT compatible with iPhone VPN). Manually configure a VPN to allow Bonjour multicast (I'm not sure what this really is...). Emulate a Mac on my Windows PC. FYI my router is a Linksys WRT54GL running Tomato 1.28. Note: this question is related to this one.

    Read the article

  • iTunes + External Hard Drive problem

    - by Samwho
    Okay, so I've always stored my music on my external hard drive and run it from there with iTunes. Recently, however, it's been a little awkward. I think it was because I accidentally tried to play a song off my hard drive when it wasn't plugged in and, as would be expected, it said it couldn't find it. So I plugged the hard drive in, located the file and off it went. But now it won't find any of the files... When I go to the "Get Info" page on the right-click menu for a song, I notice it has prepended file://localhost/ to everything, so my paths look like this: "file://localhost/E:/Sam/Media/Music/[song name]" I went into the iTunes Music Library.xml file, did a search and replace for file://localhost/, replacing it with nothing, and tried opening iTunes again, and it just added file://localhost/ to every file again! Does anyone have any idea why it does this and how to fix it without reimporting my library?

    Read the article

  • Welcome to the Java Training Beat!

    - by tmcginn
    We are a group of dedicated training developers for Java, located in the US, India, and now Mexico. In this blog we will announce new training content and events that might be of interest to our readers. In this first installment of the Java Training Beat, I would like to introduce three new Oracle By Example (OBE) modules I recently released and posted to the Oracle Online Learning Library. Creating a Simple Java Message Service (JMS) Producer with NetBeans and GlassFish - covers how to create a simple text message producer with NetBeans 7 and GlassFish. Creating Java Message Service (JMS) Resources in WebLogic Server 12c - covers how to create JMS resources using the console and WebLogic Server 12c. With this tutorial, you can replicate the results of the first tutorial in WebLogic. Creating a Publish/Subscribe Model with Message-Driven Beans and GlassFish Server - covers how to create a publish/subscribe application using JMS. This tutorial includes a short case study that includes a JSF front-end application that sends a hotel reservation request object to the server as a MapMessage. Hope you find these useful!  And do check out the Online Learning Library - we have a wide range of additional content posted and more being added every month!
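    For readers who want a feel for what the first module covers before opening it, here is a minimal, hedged JMS text-message producer sketch (the JNDI names jms/MyConnectionFactory and jms/MyQueue are placeholders you would create in GlassFish or WebLogic; they are not names taken from the tutorials):

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageProducer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.jms.TextMessage;
        import javax.naming.InitialContext;

        // Minimal JMS text message producer sketch; resource names are placeholders.
        public class SimpleProducer {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/MyQueue");

                Connection connection = factory.createConnection();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);

                TextMessage message = session.createTextMessage("Hello from the JMS producer!");
                producer.send(message);
                System.out.println("Sent: " + message.getText());

                connection.close();
            }
        }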

    Read the article

  • Documents stored on separate internal drive, Ubuntu doesn't notice on startup

    - by PlanoAlto
    My machine has Windows 7 Ultimate x64 and Ubuntu 12.04 LTS running side-by-side on a single hard drive with the GRUB bootloader, each with 500 GB of storage. I keep my personal documents on a separate 1TB hard drive so they remain isolated from any changes I make to the OS drive, but when Ubuntu starts it does not seem to notice my documents drive. While I've installed and worked with Ubuntu 12.04 Server x32 before, using it as a desktop OS is new to me. I use my documents drive for all of my personal data, including wallpapers and music, so it is imperative that Ubuntu recognize it on startup. Concerning the two specific examples: Ubuntu loads with the default blue-colored desktop instead of my desired picture of the spectacular Carina galaxy. When I right-click the desktop and select "Change Desktop Background", it wakes up from its amnesia and loads the proper background. As for my music, Rhythmbox defaults to an empty library upon reboot, forcing me to reload the settings manually each time. This gets quite tedious because I certainly can't work to my full potential without my music. The second thing I would like to address is making Ubuntu point the document directories in ~ to their appropriate counterparts on the 1TB documents drive. I realize that this question is not new, but when I created the symbolic links they ended up inside the directories and did not turn the directories themselves into symbolic links. I would also prefer not to move the files themselves from their current location on the 1TB drive. I believe this would help the Rhythmbox library problem as well, considering it's a default directory for the music player. Excerpt from fstab: proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdb6 during installation UUID=057ac83e-76ad-460d-86e5-b6d46e9b1d80 / ext4 errors=remount-ro 0 1 # swap was on /dev/sdb7 during installation #UUID=1183df90-23fc-44e4-aa17-4e7c9865d5cb none swap sw 0 0 /dev/mapper/cryptswap1 none swap sw 0 0 That's enough content for one question. I really like the Ubuntu experience so far since it doesn't treat me like an idiot out of the box (can't say the same for Windows), so I can't wait to hear from the community! Thanks for your help in advance.
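    A hedged guess at the cause: the 1TB documents drive has no /etc/fstab entry (only the OS and swap partitions appear in the excerpt above), so it is auto-mounted only on first access after login, which is after the wallpaper and the Rhythmbox library have already been looked up. A sketch of the usual fix follows; the UUID, filesystem type, and mount point are placeholders, and sudo blkid shows the real UUID:

        # /etc/fstab -- mount the documents drive at boot (placeholder values)
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/documents  ntfs-3g  defaults  0  0

        # Then point the home directories at the mounted drive without moving any files,
        # e.g. for Music (repeat for Pictures, Documents, etc.):
        rmdir ~/Music
        ln -s /media/documents/Music ~/Music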

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides detail about how you influence other people and who influences you. We'll be fetching data from a few social networking sites (i.e. LinkedIn, Facebook, Twitter, etc.) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relationship between two users can be determined. We'll be accessing the data offline as well, to provide accurate results. If we consider Facebook activities, we need access to Facebook users' news feeds and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps need to be taken for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is parsing of the data, and its storage and retrieval with the least overhead. For that I've summarized a few points below: Should we switch over to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though none of the team members have experience with them? Should we use SQL Server Analysis Services? Is there any library other than Json.NET for parsing the data? Is it advisable to use any C# library over FQL + GET requests? I've tried to provide as much info as possible. Please share your views on the same.

    Read the article

  • DomU Installation on Ubuntu 11.10

    - by sridutt
    I am trying to add a DomU operating system on Ubuntu 11.10. I have successfully installed Xen. I verified with xm info and virsh version, which returns: Compiled against library: libvir 0.9.2 Using library: libvir 0.9.2 Using API: Xen 3.0.1 Running hypervisor: Xen 4.1. Now when I tried to install the DomU it said: unable to connect to 'localhost:8000': , in VMM. So, I followed this bug link. I could then start adding the DomU. When adding the DomU, in the last stage, it gives the following error: Unable to complete install: 'POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found")' Traceback (most recent call last): File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper callback(asyncjob, *args, **kwargs) File "/usr/share/virt-manager/virtManager/create.py", line 1899, in do_install guest.start_install(False, meter=meter) File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1223, in start_install noboot) File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1291, in _create_guest dom = self.conn.createLinux(start_xml or final_xml, 0) File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1686, in createLinux if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self) libvirtError: POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found") I tried following this bug link, which said the bug is solved in the package below. When I run ./configure in it, I am getting an error: checking for LIBXML... no checking libxml2 xml2-config >= 2.6.0 ... configure: error: Could not find libxml2 anywhere (see config.log for details). What is the problem?

    Read the article

  • Valid path to deploy vm on host

    - by ELSheepO
    I've recently added a VM to the library in SCVMM to use with MS Test Pro 2010, and have run into a problem. I cannot import it using Test Pro; it just won't give me any option to import it in Lab Manager. Also, if I try to deploy it back to the host it came from, it gives me an error saying the path is not valid. Does anyone have any insight into this? Also, SCVMM seems to freeze every time I try to create a new VM from the template of the VM I've stored in the library, the same one that's giving me the problem when I try to deploy it onto the host. Thanks.

    Read the article

  • Upgrading Ubuntu (32-bit) 10.10 -> 11.04 fails and causes a kernel panic on boot

    - by Ubuntu Upgrade
    On an Ubuntu 10.10 machine I upgraded to Ubuntu 11.04 using the Update Manager. The upgrade fails and leaves the system in an unstable state. When I reboot the system I get a kernel panic on boot. The error points to /opt/abc/runtime/lib/libc.so.6. By researching this I found that a piece of third-party software, abc, causes the problem. It has its own runtime (libc) library. In the /lib/ directory there is a link file /lib/ld-abc.so.2 -> /opt/abc/runtime/lib/ld-linux.so.2. If we rename this file to /lib/abc.so.2 or remove it, the upgrade succeeds. Here is the part of the upgrade log (apt-term.log) where it crashes: ===== Services restarted successfully. Processing triggers for libc-bin ... ldconfig deferred processing now taking place /usr/bin/dpkg: /opt/abc/runtime/lib/libc.so.6: version `GLIBC_2.11' not found (required by /usr/bin/dpkg) /usr/bin/dpkg: /opt/abc/runtime/lib/libc.so.6: version `GLIBC_2.8' not found (required by /lib/libselinux.so.1) ===== Could you please let me know what the problem would be with having a runtime linker library file in the /lib directory? Does the Ubuntu upgrade check the third-party runtime as well?

    Read the article

  • No Customer Left Behind

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development What does customer experience mean to you? Is it a strategy for your executives? A new buzzword and marketing term? A bunch of CRM technology with social software added on? For me, customer experience is a customer-centric worldview that produces a deeper understanding of your business and what it takes to achieve sustainable, differentiated success. It requires you to prioritize and examine the journey your customers are on with your brand, so you can answer the question, "How can we drive greater value for our business by delivering a better customer experience?" Businesses that embrace a customer-centric worldview understand their business at a much deeper level than most. They know who their customers are, what their value is, what they do, what they say, what they want, and ultimately what that means to their business. "Why Isn't Everyone Doing It?" We're all consumers who have our own experiences with many brands. Good or bad, some of those experiences stay with us. So viscerally we understand the concept of customer experience from the stories we share. One that stands out in my mind happened as I was preparing to leave for a 12-month job assignment in Europe. I wanted to put my cable television subscription on hold. I wasn't leaving for another vendor. I wasn't upset. I just had a situation where it made sense to put my $180 per month account on pause until I returned. Unfortunately, there was no way for this cable company to acknowledge that I was a loyal customer with a logical request, and to respond accordingly. So, ultimately, they lost my business. Research shows us that it costs six to seven times more to acquire a new customer than to retain an existing one. Heavily funding the effort of getting new customers and underfunding the effort of serving the needs of your existing customers (who are your greatest advocates) is a vicious and costly cycle. "Hey, These Guys Suck!" I love my Apple iPad because it's so easy to use. The explosion of these types of technologies, combined with new media channels, has raised our expectations and made us hyperaware of what's going on and what's available. In addition, social media has given us a megaphone to share experiences both positive and negative with greater impact. We are now an always-on culture that thrives on our ability to access, connect, and share anywhere anytime. If we don't get the service, product, or value we expect, it is easy to tell many people about it. We also can quickly learn where else to get what we want. Consumers have the power of influence and choice at a global scale. The businesses that understand this principle are able to leverage that power to their advantage. The ones that don't, suffer from it. Which camp are you in? Note: This is Part 1 in a three-part series. Stop back for Part 2 on November 19.

    Read the article

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work? [closed]

    - by themoondothshine
    I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context: I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. An application is linked against libsome1.so. This application uses libdl.so to dynamically load another module, say libmagic.so. Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION. So next I try to compile and link libmagic.so with a linker script which hides all symbols except the 3 which are defined in libmagic.so and exported by it. This works... Or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1). However, when some data structures are serialized to disk, I noticed some corruption. If, in the application's directory, I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the same corruption does not happen. I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2), but nothing seems to work. What am I doing wrong?

    Read the article

  • Named my RPi 512MB @jerpi_bilbo

    - by hinkmond
    To keep our multiple Raspberry Pi boards apart from each other, I've now named my RPi Model B w/512MB: "jerpi_bilbo", which stands for Java Embedded Raspberry Pi - Bilbo (named after the Hobbit from the J.R.R. Tolkien stories). I also set up a Twitter account for him. You can follow him at: @jerpi_bilbo He's self-tweeting, manually prompted so far (using Java Embedded 7.0 and the twitter4j Java library). Works great! I'm setting him up for automated self-tweeting soon, so watch for that... Here's a pointer to the open source twitter4j Java library: download here Just unzip and extract the twitter4j-core-2.2.6.jar and put it on your Java Embedded classpath. Here's how @jerpi_bilbo uses it to tweet with his Java Embedded runtime:

        import twitter4j.*;
        import java.io.*;

        public final class Tweet {
            public static void main(String[] args) {
                String statusStr = null;
                if ((args.length > 0) && (args[0] != null)) {
                    statusStr = args[0];
                } else {
                    statusStr = new String("Hello World!");
                }
                // Create new instance of the Twitter class
                Twitter twitter = new TwitterFactory().getInstance();
                try {
                    Status status = twitter.updateStatus(statusStr);
                    System.out.println("Successfully updated the status to: " + status.getText());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    That's all you need. Java Embedded rocks the RPi! And, @jerpi_bilbo is alive... Hinkmond
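    (A hedged usage note, not from the original post: with twitter4j the OAuth credentials typically live in a twitter4j.properties file on the classpath (oauth.consumerKey, oauth.consumerSecret, oauth.accessToken, oauth.accessTokenSecret), and the sketch above would then be compiled and run on the RPi with something like javac -cp twitter4j-core-2.2.6.jar Tweet.java followed by java -cp .:twitter4j-core-2.2.6.jar Tweet "Hello from jerpi_bilbo".)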

    Read the article

  • iTunes high CPU usage

    - by Calm Storm
    I upgraded to iTunes 10.4.1 and use Windows 7, and my iTunes library is not that large at all (say about 20 GB). When I start iTunes the CPU goes to between 60-80% and stays there for a long time. I see that itunes.exe takes about 70% of the CPU in Process Explorer, and it spawns a SearchProtocolHost.exe every 2 minutes or so which takes < 0.1% CPU. Other than that, iTunes.exe is always at 70-90% and never lets me do anything else. Does someone have a suggestion? EDIT: I have tried reinstalling 10.4.1, completely deleting my library and starting with a plain installation, and that does not work. I have tried downgrading to 10.3.x and that does not work either :(

    Read the article

  • iPhone VPN to iTunes Home Sharing [migrated]

    - by Philip Crumpton
    My goal is to set up a VPN on a Windows machine that contains my iTunes library, then connect to that VPN with my iPhone and be able to utilize Home Sharing remotely. I have read that this can easily be set up if the iTunes library is on a Mac (Network Beacon and YazSoft ShareTool are two products I quickly found). I can't find anyone who has had success on a Windows machine, though. In my thinking, there are two options (aside from buying a Mac): 1.) An existing utility that takes care of this for me (like the Mac-only options listed above) and is compatible with the iPhone (Hamachi is NOT compatible with iPhone VPN), or 2.) Manually configure a VPN to allow Bonjour multicast (I can't find any information on this). FYI my router is a Linksys WRT54GL running Tomato 1.28.

    Read the article

  • Good resources for learning modern OpenGL (3.0 or later)?

    - by MatterGoal
    I'm stumbling in my search for a good resource to start with OpenGL (3.0 or later). Well, I found a lot of books, but none of them can be considered a good resource! Here are two examples: OpenGL Programming Guide (7th edition) http://www.amazon.com/exec/obidos/ASIN/0321552628/khongrou-20 This is FULL of deprecated material! Almost every chapter begins with a note about that. OpenGL Superbible (5th Edition) http://www.amazon.com/exec/obidos/ASIN/0321712617/khongrou-20 This book uses a library created by the author to explain the main topics, hiding what you want to learn! I don't want to learn how to use your library! I want to learn OpenGL! I hope you understand this is not the same question as "hey I'm not able to use Google... tell me how to learn OpenGL". I've just finished a full and deep search, but I can't find a good and complete resource for learning the "new" OpenGL while avoiding deprecated topics. Can someone head me in the right direction? I know C++ and I have 10 years of experience in development... Where can I find a good resource? I want to spend time on it and I want to learn deeply. (Please feel free to edit my question, my English is terrible!)

    Read the article

  • License compatibility question

    - by Ivaylo Slavov
    I have a question regarding software licenses. I plan to put a license on a framework that I have written. My intention is that the license should be open, in order to maintain a community. I also want to control when a new version is released and which changes will be included. The license should allow the framework to be used with commercial products, therefore respecting their own licenses. I have done some quick research and decided to dual-license my work under the Apache License 2.0 (ASL) and the Eclipse Public License (EPL). My point is that the EPL will provide me the ability to control the release cycle as well as the contributions to the project, and the Apache license will take care of any patents a 3rd party might want to use in a derived work. Also, both are open licenses. My question is related to the GPL and LGPL licenses. If I have the above licenses on my framework, will it be possible and legal for someone to create a derived work of my framework that is also a derived work of, or links to, a library that is under the LGPL license? Thanks in advance. EDIT: To be clear, I will explain how I expect things to work. The framework will define a common API for certain functionalities, as well as a Wrapper class that will invoke an implementation of that API. The Wrapper will be part of the framework, but it will internally call the actual implementation. This implementation should be in a separate library, and I would like such libraries to be developed and maintained by the community. Surely the community will have to access the framework, but I want to limit changes to the framework itself while providing freedom for any implementation of the API (a derived work of the framework). The framework will provide flexible configuration mechanisms that will tell it which implementation of an API will be used.

    Read the article
