Search Results

Search found 1062 results on 43 pages for 'broke artist'.


  • JSON onFailure issue

    - by Mikey1980
    I'm having trouble getting an AJAX/JSON function to work correctly. I had this function grabbing a value from a drop-down box, but now I want to use an anchor tag to set its value. I thought it would be easy to just use the onClick event to pass a string to the function I was using for the drop-down box, but I get the alert in the JSON onFailure event even though the data gets added to MySQL. I tried removing the alert from the onFailure event, but then it doesn't add the data. The drop-down still continues to work fine, no alerts. (I should note that removing the alert also broke my drop-down box.)

    First, I add an onClick event:

        <a href="<?php echo Settings::get('app.webroot'); ?>?view=schedule&action=questions"
           onmouseout="MM_swapImgRestore();"
           onmouseover="MM_swapImage('bre','','template/images/schedule/bre_f2.gif',1)"
           onclick="assignCallType('testing');">

    Second, I check main.js.php:

        function assignCallType(type) {
            alert(type);
            new Request.JSON({
                url: "ajax.php",
                onSuccess: function(rtndata, txt) {
                    if (rtndata['STATUS'] != 'OK') alert('Status was not okay');
                },
                onFailure: function() { alert("onFailure"); }
            }).get({ 'action': 'assignCallType', 'call_type': type });
        }

    Third, ajax.php: the variable is back in PHP and the values get added to MySQL, but I get the alert in the JSON onFailure event:

        if ($_GET['action'] == "assignCallType") {
            if ($USER->isInsideSales()) {
                $call_type = $_GET['call_type'];
                $_SESSION['callinfo']->setCallType($call_type);
                $_SESSION['callinfo']->save($callid);
                echo json_encode(array('STATUS'=>'OK'));
            } else {
                echo json_encode(array('STATUS'=>'DENIED'));
            }
        }

    Any idea where I'm going wrong? The only difference between this and the working drop-down is how the function is called; there I used onchange="assignCallType(this.value)".
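
    One common cause worth checking: MooTools' Request.JSON treats the call as failed when the reply isn't clean, parseable JSON (stray whitespace, PHP notices, or HTML mixed into the output) or the status isn't 200, even if the database write succeeded. A minimal sketch of a stricter ajax.php, assuming the same handler as above ($USER, $_SESSION['callinfo'], and $callid come from the question's context):

        if ($_GET['action'] == "assignCallType") {
            header('Content-Type: application/json'); // tell the client this is JSON
            if ($USER->isInsideSales()) {
                $_SESSION['callinfo']->setCallType($_GET['call_type']);
                $_SESSION['callinfo']->save($callid);
                echo json_encode(array('STATUS' => 'OK'));
            } else {
                echo json_encode(array('STATUS' => 'DENIED'));
            }
            exit; // stop anything after this from polluting the JSON payload
        }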

    Read the article

  • How to cancel a deeply nested process

    - by Mystere Man
    I have a class that is a "manager" sort of class. One of it's functions is to signal that the long running process of the class should shut down. It does this by setting a boolean called "IsStopping" in class. public class Foo { bool isStoping void DoWork() { while (!isStopping) { // do work... } } } Now, DoWork() was a gigantic function, and I decided to refactor it out and as part of the process broke some of it into other classes. The problem is, Some of these classes also have long running functions that need to check if isStopping is true. public class Foo { bool isStoping void DoWork() { while (!isStopping) { MoreWork mw = new MoreWork() mw.DoMoreWork() // possibly long running // do work... } } } What are my options here? I have considered passing isStopping by reference, which I don't really like because it requires there to be an outside object. I would prefer to make the additional classes as stand alone and dependancy free as possible. I have also considered making isStopping a property, and then then having it call an event that the inner classes could be subscribed to, but this seems overly complex. Another option was to create a "Process Cancelation Token" class, similar to what .net 4 Tasks use, then that token be passed to those classes. How have you handled this situation? EDIT: Also consider that MoreWork might have a EvenMoreWork object that it instantiates and calls a potentially long running method on... and so on. I guess what i'm looking for is a way to be able to signal an arbitrary number of objects down a call tree to tell them to stop what they're doing and clean up and return.

    Read the article

  • get_bookmarks() object doesn't return link_category

    - by ninja
    I'm trying to fetch all link categories from the built-in WordPress link list (or bookmarks, if you will). To do this, I simply stored all links in a variable like this:

        <?php
        $lists = get_bookmarks();
        foreach ($lists as $list) {
            $cats[] = $list->link_category;
        }
        ?>

    To my surprise, even var_dump'ing $cats gave me "string(0)", so I var_dump'ed $lists instead, and it gave me this:

        array(8) {
          [0]=> object(stdClass)#5126 (13) {
            ["link_id"]=> string(1) "1"
            ["link_url"]=> string(27) "http://codex.wordpress.org/"
            ["link_name"]=> string(13) "Documentation"
            ["link_image"]=> string(0) ""
            ["link_target"]=> string(0) ""
            ["link_description"]=> string(0) ""
            ["link_visible"]=> string(1) "Y"
            ["link_owner"]=> string(1) "1"
            ["link_rating"]=> string(1) "0"
            ["link_updated"]=> string(19) "0000-00-00 00:00:00"
            ["link_rel"]=> string(0) ""
            ["link_notes"]=> string(0) ""
            ["link_rss"]=> string(0) ""
          }

    Now, codex.wordpress.org is a default link that comes with WordPress; it's in a category called "Linklist", and as you can see, the object contains everything about that link EXCEPT the category name. According to the codex, this object should contain a field named "link_category", so I'm getting confused here. Am I missing something? Is the function broken? Regards, NINJA
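
    If the field never shows up, one workaround is to pull the categories from the taxonomy yourself. A sketch of that approach, assuming the standard 'link_category' taxonomy that WordPress uses for bookmarks:

        <?php
        $cats = array();
        foreach (get_bookmarks() as $list) {
            // Link categories live in the 'link_category' taxonomy,
            // keyed by the bookmark's link_id.
            $terms = wp_get_object_terms($list->link_id, 'link_category');
            if (!is_wp_error($terms)) {
                foreach ($terms as $term) {
                    $cats[] = $term->name;
                }
            }
        }
        $cats = array_unique($cats);
        ?>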

    Read the article

  • How do I set an og:type for the first time without losing likes?

    - by Travis
    When I first created my site, I neglected to add the Open Graph tags that Facebook recommends (http://developers.facebook.com/docs/opengraph/), and the site now has about 1200 Facebook likes through a fb:comments widget. http://graph.facebook.com/http://feedtheanimalssamples.com/ shows this:

        {
           "id": "http://feedtheanimalssamples.com/",
           "shares": 1204
        }

    Recently, I added the following OG tags:

        <meta property="fb:app_id" content="59193243341" />
        <meta property="og:title" content="Girl Talk - Feed The Animals Samples (old)" />
        <meta property="og:image" content="http://feedtheanimalssamples.com/fta_small.png" />
        <meta property="og:url" content="http://feedtheanimalssamples.com" />

    But when I add the og:type tag:

        <meta property="og:type" content="website" />

    and Lint the site, I lose all my likes. http://graph.facebook.com/http://feedtheanimalssamples.com/ starts showing this:

        {
           "id": "170545342993850",
           "name": "Girl Talk - Feed The Animals Samples",
           "picture": "http://profile.ak.fbcdn.net/hprofile-ak-snc4/188039_170545342993850_3277642_s.jpg",
           "link": "http://www.facebook.com/pages/Girl-Talk-Feed-The-Animals-Samples/170545342993850",
           "category": "Website",
           "website": "http://feedtheanimalssamples.com/",
           "description": "Interactively identifies the samples in the 2008 album 'Feed The Animals' by mashup artist Girl Talk.",
           "likes": 1
        }

    (Note the "likes": 1.) So: how do I set the og:type without losing my likes? I'm trying to let my likers know that I've created a new and improved site. I'm following the instructions at http://developers.facebook.com/blog/post/397 under "Publishing to Connected Users via Graph API", but using that API apparently requires specifying an og:type. Thanks!

    Read the article

  • MVC JsonResult with the [Authorize] attribute going to Logon but not displaying the view

    - by likestoski
    I am seeing odd behavior with MVC 3 methods that return a JsonResult when used with the Authorize attribute. What appears to happen is that the Authorize check is correctly evaluated when I am not logged in, but instead of redirecting to the logon form, the JSON response is the logon form. Is there an additional attribute that directs the response to not return a value but instead redirect the user to the logon form, preferably with the correct returnUrl value? As a demo, I set up a new MVC 3 site and added ASP.NET Membership to my DB using the aspnet_regsql.exe command. All of that is set up and logging me in correctly. The behavior of the JsonResult doesn't seem right, and I'm hoping I have just missed an attribute to make it work properly. Any help is greatly appreciated; thanks in advance. Here is the AccountController (leaving out the POST action, which is not part of this question):

        public class AccountController : Controller
        {
            public ActionResult LogOn()
            {
                return View();
            }

            [Authorize]
            public JsonResult AuthorizedAction()
            {
                return Json("Only returns if I am authorized");
            }
        }

    Here is the HTML markup:

        <script src="@Url.Content("~/Scripts/jquery-ui-1.8.11.min.js")" type="text/javascript"></script>
        <script type="text/javascript">
            $(document).ready(function () {
                $("#btnTest").click(function () {
                    $.ajax({
                        type: "POST",
                        url: "Account/AuthorizedAction",
                        data: {},
                        success: function (result) { $("#testMe").html(result); },
                        error: function (result) { $("#testMe").html('Something broke in the ajax request'); }
                    });
                });
            });
        </script>
        <input type="button" id="btnTest" value="Test me" />
        <div id="testMe">I have initial text</div>

    The result: (1) when logged in, I get 'Only returns if I am authorized' in my test div; (2) when not logged in and I have a breakpoint in my LogOn() method, I see the value Request["returnUrl"] = "/Account/AuthorizedAction", and my test div displays the logon form :) This seems like I'm just not handling it properly.

    Read the article

  • Android 2.1 gallery not backward compatible with Cupcake version, now what?

    - by Schermvlieger
    I don't know why, but in Eclair the default (non-fancy) gallery app changed its behaviour from the Cupcake version, and it broke one of my commercial applications :-( Firstly, when long-pressing a gallery and choosing "Diashow", it no longer publishes an Intent to be picked up by any application that implements the Intent filter. Instead, it directly calls "com.android.gallery/com.android.camera.ViewImage" with extras. Question: is it still possible to intercept this intent and allow the user to choose my application to do the Diashow? Secondly, the intent extras for the VIEW intent are messed up (in my build of 2.1, anyway): in Cupcake, the BucketId of the picture was provided in the Intent's query parameter, but in 2.1 the BucketId has moved to the Intent's extras. Except it is not passing the BUCKET_ID, but the unlocalized BUCKET_DISPLAY_NAME instead :-/ Question: how can I still get the unique BUCKET_ID from the intent, so that I do not have to work with a potentially non-unique BUCKET_DISPLAY_NAME? Is there anybody out there who has come up with a working solution for these problems? I thought the whole idea of Android Intents was to be able to integrate your applications with the base Android environment, but my build of 2.1 proves that this idea still lives in the land of Theory :-(

    Read the article

  • How to pass a value to a controller?

    - by rajesh
    When I try to pass a URL value to a controller action, the action is not getting the required value. I'm sending the value like this:

        function value(url, id) {
            alert(url);
            document.getElementById('rating').innerHTML = id;
            var params = 'artist=' + id;
            alert(params);
            // var newurl = 'http://localhost/songs_full/public/eslresult/ratesong/userid/1/id/27';
            var myAjax = new Ajax.Request(newurl, { method: 'post', parameters: params, onComplete: loadResponse });
            //var myAjax = new Ajax.Request(url, { method: 'POST', parameters: params, onComplete: load });
            //alert(myAjax);
        }

        function load(http) {
            alert('success');
        }

    and in the controller I have:

        public function ratesongAction()
        {
            $user = $_POST['rating'];
            echo $user;
            $post = $this->getRequest()->getPost();
            //echo $post;
            $ratesongid = $this->_getParam('id');
        }

    But I'm still not getting the result. I am using Zend Framework.
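
    Two mismatches stand out in the snippets above: the live Ajax.Request call uses newurl, which is only defined on a commented-out line, and the JavaScript posts a parameter named 'artist' while the action reads $_POST['rating']. A sketch of the action reading what is actually sent (names taken from the question; nothing else assumed):

        public function ratesongAction()
        {
            // Read the 'artist' key that the Ajax call actually posts.
            $artist = $this->getRequest()->getPost('artist');

            // Route parameters like /id/27 arrive separately.
            $ratesongid = $this->_getParam('id');

            echo $artist; // echoed back to the onComplete handler
        }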

    Read the article

  • Saving an ActiveRecord non-transactionally.

    - by theFunkyEngineer
    My application accepts file uploads, with some metadata being stored in the DB and the file itself on the file system. I am trying to make the metadata visible in the application before the file upload and post-processing are finished, but because saves are transactional, I have had no success. I have tried the callbacks and calling create_or_update() instead of save(), all to no avail. Is there a way to do this without re-writing the guts of ActiveRecord::Base? I've even attempted naming the method make() instead of save(), but perplexingly that had no effect. The code below "works" fine, but the database is not modified until everything else is finished.

        def save(upload)
          uploadFile = upload['datafile']
          originalName = uploadFile.original_filename
          self.fileType = File.extname(originalName)
          create_or_update()

          # write the file
          File.open(self.filePath, "wb") { |f| f.write(uploadFile.read) }

          begin
            musicFile = TagLib::File.new(self.filePath())
            self.id3Title = musicFile.title
            self.id3Artist = musicFile.artist
            self.id3Length = musicFile.length
          rescue TagLib::BadFile => exc
            logger.error("Failed to id track: \n #{exc}")
          end

          if(self.fileType == '.mp3')
            convertToOGG();
          end

          create_or_update()
        end

    Any ideas would be quite welcome, thanks.

    Read the article

  • log activity. intrusion detection. user event notification (interaction). messaging

    - by Julian Davchev
    I have three questions that I find somehow related, so I'll put them in the same place. I'm currently building a relatively large LAMP system, making use of messaging (ActiveMQ), memcache, and other goodies. I wonder if there are best practices or nice tips and tricks on how to implement the following. The system is user-aware, meaning all actions can be bound to a particular logged-in user.

    1. How to log all actions/activities of users, so that stats/graphs can be extracted later for analysis? At best, that will include all URL calls, POST data, etc., meaning tons of inserts. I am thinking that sending messages to ActiveMQ, with a cron job later dumping them into the DB and another analysing them, might be a good idea here. Since I'm using Zend Framework, I guess I can use a request plugin so I don't have to make the log() call all over the code.

    2. How to log stuff so it may be used for intrusion detection? I know most things might be done at the HTTP level using Apache mods, for example, but there are also specific cases (5 failed login attempts in a row leads to a captcha, etc.). This would also involve tons of inserts. Here I guess direct usage of memcache might be the best approach, as the data doesn't seem vital enough to be permanently persisted. I'm not sure whether I can reuse the data from point 1.

    3. The system will notify users of some events: something needs approval, something broke, whatever. Some events will need feedback (an action) from the user; others are just informational. I wonder if there are common solutions for needs like this. Example: based on occurring event(s), the user is notified (in a user inbox, for example) of what happened, with a link or something to lead them to the details of the thing that happened so they can take action accordingly. These seem trivial at first look, but the problem I see is that coding them directly quickly becomes hard to maintain.
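
    For point 1, a sketch of the request-plugin idea in Zend Framework 1. The queue producer here (My_Queue) is a hypothetical stand-in for whatever ActiveMQ client you use, e.g. a STOMP wrapper; everything else is standard ZF1 API:

        class My_Plugin_ActivityLogger extends Zend_Controller_Plugin_Abstract
        {
            public function dispatchLoopShutdown()
            {
                $request = $this->getRequest();
                $entry = array(
                    'user' => Zend_Auth::getInstance()->getIdentity(),
                    'url'  => $request->getRequestUri(),
                    'post' => $request->getPost(),
                    'time' => time(),
                );
                // Hypothetical producer; replace with your ActiveMQ/STOMP client.
                My_Queue::publish('user.activity', json_encode($entry));
            }
        }

        // Registered once in the bootstrap, it fires on every request:
        // Zend_Controller_Front::getInstance()->registerPlugin(new My_Plugin_ActivityLogger());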

    Read the article

  • MySQL problem reconnecting (mysqld.exe) keeps giving error...

    - by Ronedog
    Need some guidance figuring out what went wrong. I've been using MySQL and phpMyAdmin for just under a year on my home computer while I develop a web app. Three days ago I updated my Windows Vista with all the "wonderful" Microsoft updates, security patches, etc., and now it's broken. I tried uninstalling all the upgrades, but there are four of them I can't uninstall because Microsoft says they're "operating system" updates. My system is: Windows Vista, PHP 5+, MySQL 5.1, Apache 2+. I can run my web app and it queries the database without any problems. However, when I open phpMyAdmin to get into the database I get an error, "mysqld.exe has stopped working", and phpMyAdmin crashes. I tried going to the command line to do a mysqldump to back up my database, and it gives me error 2013, "could not connect to the server". If I restart the computer, the web app will work again. Basically, PHP can query the database, but if I try to get at the database through phpMyAdmin or the command prompt, the mysqld.exe error occurs and takes MySQL down. Any ideas what's going on here? Any ideas how to get around this to back up the DB so I can reinstall MySQL? I'm really lost about where to start. I don't really know if the updates caused the problem, or if the four updates that can't be uninstalled are really the problem. Any tips will be appreciated. Thanks.

    Read the article

  • Strange offset space between <button> as parent container and <div> as child.

    - by Maxja
    I need to decorate a standard HTML button. As the base element I took <button> instead of <input>, because I decided that the element must be a parent container. There is a child <div> element inside it; this <div> will be the core element for decoration and should occupy the entire space of the parent button element:

        <button>
          <div>*decoration goes here*</div>
        </button>

    The Cascading Style Sheets might look like this:

        button {
          margin: 0;
          border: 0;
          padding: 0;
          width: *150*px;
          height: *50*px;
          position: relative;
        }

        div {
          margin: 0;
          border: 0;
          padding: 0;
          width: 100%;
          height: 100%;
          background: *black*;
          position: absolute;
          top: 0;
          left: 0;
        }

    And the HTML:

        <button type="button">
          <div>*decoration goes here*</div>
        </button>

    Opera (10) does well, and the WebKit browsers Chrome (6) and Safari (4) also do well, but Internet Explorer (8) is bad: the DOM inspector shows some strange offset space at the top and left. Firefox (3) is also bad: the DOM inspector shows that the <div> gets a negative value in the CSS properties right: and bottom:, and even if these properties are set to zero (0), the inspector still shows the same negative value. I almost broke my brain. Help me solve this problem, please!

    Read the article

  • Form fields blank when rendering a partial with a collection of objects. Help!

    - by dustmoo
    Alright, I know my title is a little obscure, but it best describes the problem I am having. Essentially, I have a list of users and want to be able to edit their information in-line using AJAX. Since the users are showing up in rows, I am using a partial to render the data and the forms (which will initially be hidden by the AJAX). However, when the rows are rendered, currently only the last item has its form fields populated. I suspect this has something to do with the fact that all the form fields have the same ids and it is confusing the DOM, but I don't know how to make sure the ids are unique. Here is a small example.

    In my view:

        <%= render :partial => 'shared/user', :collection => @users %>

    My partial (broken down to just the form); note that I am using the local variable "user":

        <% form_for user, :html => {:multipart => true} do |f| -%>
          <%= f.label :name, "Name*" %>
          <%= f.text_field :title, :class => "input" %>
          <%= f.label :Address, "Address" %>
          <%= f.text_field :address, :class => "input" %>
          <%= f.label :description, "Description*" %>
          <%= f.text_area :description, :class => "input" %>
        <% end -%>

    When the HTML is rendered, each form has a unique id (from the id of the user), but the elements themselves all have the same id, and only the last user's form is actually getting populated with values. Does anyone have any ideas?? :) Thanks in advance!

    Read the article

  • My MySQL & PHP while loop runs out of memory, when there is only one row

    - by Sam
    Hey all, why is this code throwing an out-of-memory error when there is only one row in the database?

        $request_db = mysql_query("SELECT * FROM requests WHERE haveplayed='0'") or die(mysql_error());
        $request = mysql_fetch_array($request_db);

        echo "<table border=\"1\" align=\"center\">";
        while ($request['haveplayed'] == "0") {
            echo "<tr><td>";
            echo $request['SongName'];
            echo "</td><td>";
            echo "<tr><td>";
            echo $request['Artist'];
            echo "</td><td>";
            echo "<tr><td>";
            echo $request['DedicatedTo'];
            echo "</td><td>";
        }
        echo "</table>";

    Thanks.
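
    The loop condition never changes: $request is fetched once, so $request['haveplayed'] stays "0" forever and the echoed output grows until memory runs out. A sketch of the usual pattern, fetching a row per iteration and closing each table row (same legacy mysql_* calls as the question):

        $request_db = mysql_query("SELECT * FROM requests WHERE haveplayed='0'")
            or die(mysql_error());

        echo "<table border=\"1\" align=\"center\">";
        // mysql_fetch_array() returns false when the rows run out, ending the loop.
        while ($request = mysql_fetch_array($request_db)) {
            echo "<tr><td>" . $request['SongName'] . "</td>";
            echo "<td>" . $request['Artist'] . "</td>";
            echo "<td>" . $request['DedicatedTo'] . "</td></tr>";
        }
        echo "</table>";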

    Read the article

  • How can you exclude a large number of records in a cross db query using LINQ2SQL?

    - by tap
    So here is my situation: I have a vendor-supplied DB we cannot modify, and a custom DB that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported. What I originally started with was selecting the list of ids that have been imported from the custom DB, and then querying the vendor DB using that list in a .Where(x => !idList.Contains(x.Id)) clause on the remote query. This worked up until we broke 2100 records imported into the custom DB, as 2100 is the limit on the number of parameters that can be passed into SQL. After finding out this was the actual problem, and not the 'invalid buffer'/'severe error' ADO.NET reported, my solution was to remove the first 2000 ids in the remote query and then remove the remaining records in the local query. Having to pull back a large number of irrelevant records just to exclude them, so I can get the correct 250 records, seems very inelegant. Is there a better way to do this, short of a cross-DB stored procedure? Thanks in advance.

    Read the article

  • Insert names into table if not already existing

    - by John
    I am trying to build an application that allows users to submit names to be added to a DB table (runningList). Before the name is added, though, I would like to check firstname and lastname against another table (controlList); the name must exist in controlList, or we reply that the name is not valid. If the name is valid, I would then like to check firstname and lastname against runningList to make sure it isn't already there. If it isn't there, insert firstname & lastname; if it is there, reply that the name is already used. Below is code that worked before I tried to add the test against controlList; I somehow broke it altogether while trying to add the extra step.

        if (mysql_num_rows(mysql_query('SELECT * FROM runninglist WHERE firstname=\'' . $firstname . '\' AND lastname=\'' . $lastname . '\' LIMIT 1')) == 0) {
            $sql = "INSERT INTO runninglist (firstname, lastname)
                    VALUES ('$_POST[firstname]','$_POST[lastname]')";
        } else {
            echo "This name has been used.";
        }

    Any direction would be appreciated. Thanx, John
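
    A sketch of the two-step check, with the controlList test first and the inputs escaped (table and column names are taken from the question; note the original snippet only builds $sql, so make sure something actually executes the INSERT):

        $first = mysql_real_escape_string($_POST['firstname']);
        $last  = mysql_real_escape_string($_POST['lastname']);

        $inControl = mysql_num_rows(mysql_query(
            "SELECT 1 FROM controlList WHERE firstname='$first' AND lastname='$last' LIMIT 1"));

        if ($inControl == 0) {
            echo "Name not valid.";
        } elseif (mysql_num_rows(mysql_query(
            "SELECT 1 FROM runninglist WHERE firstname='$first' AND lastname='$last' LIMIT 1")) > 0) {
            echo "This name has been used.";
        } else {
            mysql_query("INSERT INTO runninglist (firstname, lastname)
                         VALUES ('$first', '$last')") or die(mysql_error());
        }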

    Read the article

  • data ownership and performance

    - by Ami
    We're designing a new application, and we ran into some architectural questions when thinking about data ownership. We broke the system down into components, for example Customer and Order. Each component/module is responsible for a specific business domain; i.e., Customer deals with CRUD of customers and business processes centered around customers (register a new customer, block a customer account, etc.). Each module is the owner of a set of database tables, and only that module may access them. If another module needs data that is owned by a different module, it retrieves it by requesting it from that module. So far so good. The question is how to deal with scenarios such as a report that needs to show all the customers and, for each customer, all his orders. In such a case we need to get all the customers from the Customer module, iterate over them, and for each one get all the data from the Order module. Performance won't be good... obviously it would be much better to have a stored proc join the customers table and the orders table, but that would also mean direct access to data owned by another module, creating coupling and dependencies that we wish to avoid. This is a simplified example; we're dealing with an enterprise application with a lot of business entities and relationships, and my goal is to keep it clean and as loosely coupled as possible. I foresee many changes to the data scheme in the future, and possibly splitting the system into several completely separate systems. I wish to have a design that would allow this to be done in a relatively easy way. Thanks!
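
    One common compromise is a dedicated reporting module: reads that must span modules go through a single sanctioned read-only component (often against a replica or denormalized view), while all writes still pass through the owning modules. A sketch, in PHP purely for illustration; the interface, class, and table names are made up:

        interface CustomerOrdersReport
        {
            /** Read-only rows of customers joined with their orders. */
            public function fetchCustomersWithOrders();
        }

        class SqlCustomerOrdersReport implements CustomerOrdersReport
        {
            private $db;

            public function __construct(PDO $db) // ideally a read-only connection/replica
            {
                $this->db = $db;
            }

            public function fetchCustomersWithOrders()
            {
                // The cross-module join lives in exactly one sanctioned place,
                // so the coupling is contained rather than scattered.
                $sql = 'SELECT c.id, c.name, o.id AS order_id, o.total
                        FROM customers c
                        JOIN orders o ON o.customer_id = c.id';
                return $this->db->query($sql)->fetchAll(PDO::FETCH_ASSOC);
            }
        }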

    Read the article

  • Substitute User Controls on Failure

    - by Brian
    Recently, a user control I was developing threw an exception. I know what caused it, but the issue got me thinking: if a user control throws an exception for whatever reason, and I wish to replace that user control with something else (e.g., an error saying, "Sorry, this part of the page broke.") and perhaps log the error, what would be a good way to do it independently of what the user control is or does? (I'm deliberately not saying what the user control does/is, because I want an answer where that is irrelevant.) Code sample:

        <asp:TableRow VerticalAlign="Top" HorizontalAlign="Left">
          <asp:TableCell>
            <UR:MyUserControl ID="MyUserControl3" runat="server"
                FormatString="<%$ AppSettings:RVUC %>"
                ConnectionString="<%$ ConnectionStrings:WPDBC %>"
                Title="CO" />
          </asp:TableCell>
          <asp:TableCell>
            <UR:MyUserControl ID="MyUserControl4" runat="server"
                FormatString="<%$ AppSettings:RVUA %>"
                ConnectionString="<%$ ConnectionStrings:WPDBA %>"
                Title="IEAO" />
          </asp:TableCell>
        </asp:TableRow>

    Read the article

  • Is this the correct way to set up has many with multiple associations?

    - by user323763
    I'm trying to set up a new project for a music site. I'm learning RoR and am a bit confused about how to make join models/tables. Does this look right? I have users, playlists, songs, and comments. Users can have multiple playlists. Users can have multiple comments on their profile. Playlists can have multiple songs. Playlists can have comments. Songs can have comments.

        class CreateTables < ActiveRecord::Migration
          def self.up
            create_table :users do |t|
              t.string :login
              t.string :email
              t.string :firstname
              t.string :lastname
              t.timestamps
            end

            create_table :playlists do |t|
              t.string :title
              t.text :description
              t.timestamps
            end

            create_table :songs do |t|
              t.string :title
              t.string :artist
              t.string :album
              t.integer :duration
              t.string :image
              t.string :source
              t.timestamps
            end

            create_table :comments do |t|
              t.string :title
              t.text :body
              t.timestamps
            end

            create_table :users_playlists do |t|
              t.integer :user_id
              t.integer :playlist_id
              t.timestamps
            end

            create_table :playlists_songs do |t|
              t.integer :playlist_id
              t.integer :song_id
              t.integer :position
              t.timestamps
            end

            create_table :users_comments do |t|
              t.integer :user_id
              t.integer :comment_id
              t.timestamps
            end

            create_table :playlists_comments do |t|
              t.integer :playlist_id
              t.integer :comment_id
              t.timestamps
            end

            create_table :songs_comments do |t|
              t.integer :song_id
              t.integer :comment_id
              t.timestamps
            end
          end

          def self.down
            drop_table :users
            drop_table :playlists
            drop_table :songs
            drop_table :comments
            drop_table :users_playlists
            drop_table :playlists_songs
            drop_table :users_comments
            drop_table :playlists_comments
            drop_table :songs_comments
          end
        end

    Read the article

  • Expressjs route param as variable in main app

    - by MoDFoX
    For my app I have two routes set up:

        app.get('/', routes.index);
        app.get('/:name', routes.index);

    I would like it so that if I don't specify a param (say I just go to the app URL, localhost:3000), it loads a default user, but if I do specify a param (localhost:3000/user), it uses that as the variable "username" in the following function (placed after my routes):

        (function getUser() {
            var body = '',
                username = 'WillsonSM',
                options = {
                    host: 'ws.audioscrobbler.com',
                    port: 80,
                    path: '/2.0/?method=user.gettopartists&user=' + username + '&format=json&limit=20&api_key=APIKEYGOESHERE'
                };
            require('http').request(options, function (res) {
                res.setEncoding('utf8');
                res.on('data', function (chunk) { body += chunk; });
                res.on('end', function () {
                    body = JSON.parse(body);
                    artists = body.topartists.artist;
                });
            }).end();
        })();

    Along with this I have my route set up like so:

        exports.index = function (req, res) {
            res.render('index', { title: 'LasTube' });
            username = req.params.name;
            console.log(username);
        };

    Unfortunately, setting username there to req.params.name does not seem to be accessible from the main app function. My question is: how can I set Express/Node to use the parameter set via /name when available, and just use a default ("WillsonSM" in this example) if not? I've tried taking "username" out of the main app and just leaving it in the function, but username becomes undefined, as it is inaccessible from the route, and the app will not run. I can spit out "username" via the route's console.log, so assigning it there is not an issue, but as I am new to Express, I am unaware of how I should go about doing this. I have tried everything I can think of and find from looking around the internet. Also, if there is a better way of doing this, or I am doing something wrong, please let me know. If I've left out any information, just throw in a comment and I'll try to address it.

    Read the article

  • Add New Features to WMP with Windows Media Player Plus

    - by DigitalGeekery
    Do you use Windows Media Player 11 or 12 as your default media player? Today, we're going to show you how to add some handy new features and enhancements with the Windows Media Player Plus third-party plug-in.

    Installation and Setup

    Download and install Media Player Plus! (link below). You'll need to close out of Windows Media Player before you begin, or you'll receive a warning message. The next time you open Media Player, you'll be presented with the Media Player Plus settings window. Some of the settings will be enabled by default, such as the Find as you type feature.

    Using Media Player Plus!

    Find as you type allows you to start typing a search term from anywhere in Media Player without having to be in the Search box. The search term will automatically fill in the search box and display the results. You'll also see Disable group headers in the Library Pane; this setting will display library items in a continuous list, similar to the functionality of Windows Media Player 10. Under User Interface you can enable displaying the currently playing artist and title in the title bar. This is enabled by default. The Context Menu page allows you to enable context menu enhancements. The File menu enhancement allows you to add the Windows context menu to Media Player on the library pane, list pane, or both. Right-click on a title, select File, and you'll see the Windows context menu. Right-click on a title and select Tag Editor Plus. Tag Editor Plus allows you to quickly edit media tags. The Advanced tab displays a number of tags that Media Player usually doesn't show; only the tags with the notepad-and-pencil icon are editable. The Restore Plug-ins page allows you to configure which plug-ins should be automatically restored after a Media Player crash. The Restore Media at Startup page allows you to configure Media Player to resume playing the last playlist and track, and even whether it was playing or paused at the time the application was closed. So, if you close out in the middle of a song, it will begin playing from that point the next time you open Media Player. You can also set Media Player to rewind a certain number of seconds from where you left off; this is especially useful if you are in the middle of watching a movie. There's also the option to have your currently playing song sent to Windows Live Messenger. You can access the settings at any time by going to Tools, Plug-in properties, and selecting Windows Media Player Plus.

    Windows Media Player Plus is a nice little free plug-in for WMP 11 and 12 that brings a lot of additional functionality to Windows Media Player. If you use Media Player 11 or WMP 12 in Windows 7 as your main player, you might want to give it a try. Download Windows Media Player Plus!

    Read the article

  • Create Music Playlists in Windows 7 Media Center

    - by DigitalGeekery
    One of the new features in Windows 7 Media Center is the ability to easily create music playlists without using Media Player. Today we'll take a closer look at how to create them directly in Media Center.

    Create Manual Playlists

    Open Windows Media Center and select the Music Library. From within the Music Library, choose playlists from the top menu, then select Create Playlist. Give your new playlist a name, and select Next. Choose Music Library and select Next. Select "songs" from the top menu, choose the songs for your playlist from your library, and select Next when finished. You can also click Select All to add all songs to your playlist, or Clear All to remove them. Note: you can also sort by artist, album, genre, etc. from the top menu. Now you can review and edit your playlist. Click the up and down pointers to move songs up and down in the playlist, or "X" to remove them. You can also go back and add additional songs by selecting Add More. Click Create when you are finished.

    Auto Playlists

    Windows Media Center also allows you to create six different auto playlists. These are dynamic playlists based on pre-defined criteria. Auto playlists include All Music, Music added in the last month, Music auto rated at 5 stars, Music played in the last month, Music played the most, and Music rated 4 or 5 stars. These auto playlists will change dynamically as your library and listening habits change. Your new music playlists can be found under playlists in the Music Library. Select play playlist to start the music. Now kick back and enjoy the music from your playlist.

    Conclusion

    While earlier versions of WMC allowed you to create playlists, you had to do it through Windows Media Player. This is a nice new feature for music lovers who use WMC and prefer to do everything with a remote. Do you already have playlists that you've created in Windows Media Player? Windows Media Center can play those too. If your playlists are in the default Music folder, Media Center will detect them automatically and add them to your Music Library. Plus, any playlists you create in Media Center are also available for Media Player. For more on creating playlists in Media Player, check out our previous articles on how to create a custom playlist in Windows Media Player 12, and how to create auto playlists in WMP 12.

    Read the article

  • Anatomy of a serialization killer

    - by Brian Donahue
    As I had mentioned last month, I have been working on a project to create an easy-to-use managed debugger. It's still an internal tool that we use at Red Gate as part of product support to analyze application errors on customers' computers, and as such, it should be easy to use and not require installation. Since the project has grown rather large and important, I decided to use SmartAssembly to protect all of my hard work. This was trivial for the most part, but the loading and saving of results was broken by SA after obfuscation, rendering the loading and saving of XML results basically useless, although the merging and error reporting was an absolute godsend and definitely worth the price of admission. (Well, I get my Red Gate licenses for free, but you know what I mean!)

    My initial reaction was to simply exclude the serializable results class and all of its members from obfuscation, and that was just dandy. But a few weeks on, I decided to look into exactly why serialization had broken, and to change the code to work with SA, so that any new code I write will be compatible with SmartAssembly and save me additional testing and changes to the SA project.

    In simple terms, SA does all it can to prevent serialization problems. For instance, it will not obfuscate public members of a DLL, and it will exclude any types with the Serializable attribute from obfuscation. This prevents public members and properties from being made private and having their names changed. If the serialization is done inside the executable, however, public members have their access changed to private and are renamed. That was my first problem, because my types were in the executable assembly and implemented ISerializable, but did not have the Serializable attribute set on them!

        public class RedFlagResults : ISerializable
        {
        }

    The second problem was caused by the pruning feature. Although RedFlagResults had public members, they were not true properties, and the class used the GetObjectData() method of ISerializable to serialize them. For that reason, SA could not exclude these members from pruning, which further broke the serialization.

        public class RedFlagResults : ISerializable
        {
            public List<RedFlag.Exception> Exceptions;

            #region ISerializable Members

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("Exceptions", Exceptions);
            }

            #endregion
        }

    So to fix this, it was necessary to make Exceptions a proper property by implementing get and set on it. I also added the Serializable attribute, so I no longer have to exclude the class from obfuscation in the SA project, and the DoNotPrune attribute means I do not need to exclude the class from pruning either.

        [Serializable, SmartAssembly.Attributes.DoNotPrune]
        public class RedFlagResults
        {
            public List<RedFlag.Exception> Exceptions { get; set; }
        }

    Similarly, the Exception class gets the Serializable and DoNotPrune attributes applied so all of its properties are excluded from obfuscation.

    Now my project has some protection from prying eyes: the code is scrambled so it's harder to reverse-engineer, without breaking anything. SmartAssembly has also provided the benefit of merging, so that the end user doesn't need to extract all of the DLL files needed by RedFlag into a directory, and the tool can be run directly from the .zip archive.
When an error occurs (hey, I'm only human!), an exception report can be sent to me so I can see what went wrong without having to, er, debug the debugger.

    Read the article

  • Editor's Notebook - Social Aura: Insights from the Oracle Social Media Summit

    - by user462779
    Panelists talk social marketing at the Oracle Social Media Summit

    On November 14, I traveled to Las Vegas for the first-ever Oracle Social Media Summit. The two-day event featured an impressive collection of social media luminaries, including David Kirkpatrick (founder and CEO of Techonomy Media and author of The Facebook Effect), John Yi (Head of Marketing Partnerships, Facebook), Matt Dickman (EVP of Social Business Innovation, Weber Shandwick), and Lyndsay Iorio (Social Media & Communications Manager, NBC Sports Group), among others. It was also a great opportunity to talk shop with some of our new Vitrue and Involver colleagues, who had been returning great social media results even before their companies were acquired by Oracle. I was live tweeting the event from @OracleProfit, which was great for those who wanted to follow along with the proceedings from the comfort of their office or blackjack table. But I've also found over the years that live tweeting an event is a handy way to take notes: I can sift back through my record of what people said, or thoughts I had at the time, and organize the Twitter messages into some kind of summary account of the proceedings. I've had nearly a month to reflect on the presentations and conversations at the event, and a few key topics have emerged.

    David Kirkpatrick's comment during the opening presentation really set the stage for the conversations that followed. Especially if you are a marketer or publisher, the idea that you are in a one-way broadcast relationship with your audience is a thing of the past. "Rising above the noise" does not mean reaching for a megaphone, ALL CAPS, or exclamation marks. Hype will not motivate social media denizens to do anything but unfollow and tune you out. But knowing your audience, creating quality content and/or offers for them, treating them with respect, and making an authentic effort to please them: that's what I believe is now necessary. And Kirkpatrick's comment early in the day really made the point. Later in the day, our friends @Vitrue demonstrated this point by elaborating on a comment by Facebook's John Yi: if a social strategy consists of nothing more than cutting and pasting the same message into different social media properties, you're missing the opportunity to have an actual conversation.

    Walter Benjamin, perplexed by auraless Twitter messages

    Not to get too far afield, but 20th-century cultural critic Walter Benjamin has a concept that is useful for understanding the dynamics of the empty social media gesture: aura. In his work The Work of Art in the Age of Mechanical Reproduction, Benjamin struggled to understand the difference he perceived between the value of a handmade art object (a painting, wood cutting, sculpture, etc.) and a photograph. For Benjamin, aura is similar to the "soul" of an artwork: the intangible essence that is created when an artist picks up a tool and puts creative energy and effort into a work. I'll defer to Wikipedia: "He argues that the 'sphere of authenticity is outside the technical' so that the original artwork is independent of the copy, yet through the act of reproduction something is taken from the original by changing its context. He also introduces the idea of the 'aura' of a work and its absence in a reproduction." So make sure you put aura into your social interactions. Don't just mechanically reproduce them.
    Keeping aura in your interactions requires the intervention of an actual human being. That's why @NoahHorton's comment about content curation struck me as incredibly important. Maybe it's just my own prejudice, being in the content curation business myself. And this isn't to totally discount machine-aided content management systems, content recommendation engines, and other tech-driven tools for building an exceptional content experience. It's just that without that human interaction (the editor who reviews the analytics and responds to user feedback), interactions over social media feel a bit empty. It is SOCIAL media, right? (We'll leave the conversation about social machines for another day.) At the end of the day, experimentation is key. Just like trying to find the right joke to tell at the beginning of your presentation, or that good opening line at a cocktail party, social media messages and interactions can take some trial and error. Don't be afraid to try things, tinker with incomplete ideas, abandon things that don't work, and engage in the conversation. And make sure your heart is in it; otherwise, your audience can tell. And finally:

    Read the article

  • Prepping the Raspberry Pi for Java Excellence (part 1)

    - by HecklerMark
    I've only recently been able to begin working seriously with my first Raspberry Pi, received months ago but hastily shelved in preparation for JavaOne. The Raspberry Pi and other diminutive computing platforms offer a glimpse of the potential of what is often referred to as the embedded space, the "Internet of Things" (IoT), or Machine to Machine (M2M) computing. I have a few different configurations I want to use for multiple Raspberry Pis, but for each of them, I'll need to perform the following common steps to prepare them for their various tasks:

        - Load an OS onto an SD card
        - Get the Pi connected to the network
        - Load a JDK

    I've been very happy to see good friend and JFXtras teammate Gerrit Grunwald document how to do these things on his blog (link to article here - check it out!), but I ran into some issues configuring wi-fi that caused me some needless grief. Not knowing if any of the pitfalls were caused by my slightly older version of the Pi, and not being able to find anything specific online to help me get past them, I kept chipping away at it until I broke through. The purpose of this post is to (hopefully) help someone else recognize the same issues if/when they encounter them and work past them quickly.

    There is a great resource page here that covers several ways to get the OS onto an SD card, but here is what I did (on a Mac):

        1. Plug the SD card into a reader on/in the Mac
        2. Format it (FAT32)
        3. Unmount it (diskutil unmountDisk diskn, where n is the disk number representing the SD card)
        4. Transfer the disk image for Debian to the SD card (dd if=2012-08-08-wheezy-armel.img of=/dev/diskn bs=1m)
        5. Eject the card from the Mac (diskutil eject diskn)

    There are other ways, but this is fairly quick and painless, especially after you do it several times. Yes, I had to do that dance repeatedly (minus formatting) due to the wi-fi issues, as they kept killing the Pi's ability to boot. You should be able to dramatically reduce the number of OS loads you do, though, if you do a few things with regard to your wi-fi.

    Firstly, I strongly recommend you purchase the Edimax EW-7811Un wi-fi adapter. This adapter/chipset has been proven with the Raspberry Pi, it's tiny, and it's cheap. Avoid unnecessary aggravation and buy this one!

    Secondly, visit this page for a script and instructions regarding how to configure your new wi-fi adapter with your Pi. Here is the rub, though: there is a missing step. At least for my combination of Pi version, OS version, and uncanny gift of timing and luck, there was. :-) Here is the sequence of steps I used to make the magic happen:

        1. Plug your newly minted SD card (with OS) into your Pi and connect a network cable (for internet connectivity)
        2. Boot your Pi. On the first boot, do the following:
           - Opt to have it use all space on the SD card (will require a reboot eventually)
           - Disable overscan
           - Set your timezone
           - Enable the ssh server
           - Update raspi-config
        3. Reboot your Pi. This will reconfigure the SD card to use all its space (see above).
        4. After you log in (UID: pi, password: raspberry), upgrade your OS. This was the missing step for me that put a merciful end to the repeated SD card re-imaging and made the wi-fi configuration trivial. To do so, just type sudo apt-get upgrade and give it several minutes to complete. Pour yourself a cup of coffee and congratulate yourself on the time you've just saved. ;-)
        5. With the OS upgrade finished, you can now follow Mr. Engman's directions (to the letter, please see link above), download his script, and let it work its magic.
One aside: I plugged the little power-sipping Edimax directly into the Pi and it worked perfectly. No powered hub needed, at least in my configuration. To recap, that OS upgrade (at least at this point, with this combination of OS/drivers/Pi version) is absolutely essential for a smooth experience. Miss that step, and you're in for hours of "fun". Save yourself! I'll pick up next time with more of the Java side of the RasPi configuration, but as they say, you have to cross the moat to get into the castle. Hopefully, this will help you do just that. Until next time! All the best, Mark 

    Read the article

  • Oracle's PeopleSoft Customer Advisory Boards Convene to Discuss Roadmap at Pleasanton Campus

    - by john.webb(at)oracle.com
    Last week we hosted all of the PeopleSoft CABs (Customer Advisory Boards) at our Pleasanton Development Center to review our detailed designs for future Feature Packs, PeopleSoft 9.2, and beyond. Over 150 customers from 79 companies attended, representing a variety of industries, geographies, and company sizes. The PeopleSoft team relies heavily on this group to provide key input on our roadmap for applications as well as technology direction. A good product strategy is one part well-thought-out idea with many handfuls of customer validation, and very often our best ideas originate from these customer discussions. While the individual CABs have frequent interactions with our teams, it's always great to have all of them in one place and in person. Our attendance was up from last year, which I attribute to two things: (1) more interest as a result of the PeopleSoft 9.1 upgrade; (2) an improving economy allowing for more travel. Maybe we should index the second item meeting-to-meeting and use it as a market indicator - we'll see! We kicked off the day-one session with an overview of the PeopleSoft roadmap, and I outlined our strategy around Feature Packs and PeopleSoft 9.2. Given the high adoption rate of PeopleSoft 9.1 (over 4x that of 9.0 given the same time lapse since the release date), there was a lot of interest around the 9.1 Feature Packs as a vehicle for continuous value. We provided examples of our three central design themes: simplicity, productivity, and lower TCO, including those already delivered via Feature Packs in 2010. A great example of this is the Company Directory feature in PeopleSoft HCM. The configuration capabilities and the new actionable links our CAB advised us on last spring were made available to all customers late last year. We reviewed many more future navigation changes that will fundamentally change the way users interact with PeopleSoft. Our old friend, the menu tree, is being relegated from center stage to a bit part, with new concepts like Activity Guides, Train Stops, Related Actions, Work Centers, Collaborative Workspaces, and Secure Enterprise Search bringing users what they need in a contextual, role-based manner with fewer clicks. Paco Aubrejuan, our PeopleSoft GM, and Steve Miranda, the SVP for Fusion Applications, then discussed our plans around Oracle's application investment strategy. This included our continued investment in developing both PeopleSoft and Fusion, as well as the co-existence strategy with new Fusion Apps integrating with PeopleSoft Apps. Should you want to view this presentation, a recording is available. Jeff Robbins, our lead PeopleTools strategist, provided the roadmap for PeopleTools and discussed our continuing plan to deliver annual releases to further evolve the user experience. Numerous examples were highlighted with the navigation techniques I mentioned previously. Jeff also provided a lot of food for thought around lifecycle management topics and how to remain current on releases with a lower cost of ownership. Dennis Mesler, from Boise, was the guest speaker in this slot; he spoke about the new PeopleSoft Test Framework (PTF). Regression testing is a key cost component when product updates are applied. This new tool (which is free to all PeopleSoft customers as part of PeopleTools 8.51) provides a metadata-driven approach to recording and executing test scripts.
    Coupled with what our Usage Monitor enables, PTF provides our customers a powerful tool to lower costs and manage product updates more efficiently, at the time of their choosing. Beyond the general session, we broke out into the individual CABs: HCM, Financials, ESA/ALM, SRM, SCM, CRM, and PeopleTools/Technology. A day and a half of very engaging discussions around our plans took place for each product pillar. More about that to follow in future posts. We capped the first day with a reception sponsored by our partners: Infosys, SmartERP (represented by Doris Wong), and Grey Sparling Solutions (represented by Chris Heller and Larry Grey). Great to see these old friends actively engaged in the very busy PeopleSoft ecosystem!

    Jeff Robbins previews the roadmap for PeopleTools with the PeopleSoft CAB

    Read the article
