Search Results

Search found 15051 results on 603 pages for 'radio shows'.


  • Call to a member function ... on a non-object

    - by jayceekay
    i have an object which is instantiated in an initialize file, which is called with every request. the name is right, so why is it telling me that oourls isn't an object and that redirectLoggedIn isn't its method? a var dump on oourls says NULL. but it's instantiated, and the backtrace at the bottom shows that it goes through initialization and instantiates it. pretty small snippet of code, here's the relevant bit: if($email) { global $session; $session->grantLogin($email); global $oourls; $oourls->redirectLoggedIn(); } else { return false; } and here's the output of debug_print_backtrace i threw in above the oourls method call because i'm completely confused: #0 accounts::verifyEmailRegisterAccount(37a6274c8f4bfa5c537b40e8e04d634a) called at [\public\includes\default\verifyemail.php:16] #1 require_once(\public\includes\default\verifyemail.php) called at [\support\php\ObjectOrientedURLs.class.php:48] #2 ObjectOrientedURLs->mhqqrVerifyemail(Array ([0] => 37a6274c8f4bfa5c537b40e8e04d634a)) #3 ReflectionMethod->invoke(ObjectOrientedURLs Object (), Array ([0] => 37a6274c8f4bfa5c537b40e8e04d634a)) called at [\support\php\ObjectOrientedURLs.class.php:280] #4 ObjectOrientedURLs->parseAndInvokeURL() called at [\support\php\ObjectOrientedURLs.class.php:255] #5 ObjectOrientedURLs->__construct() called at [\support\php\initialize.php:76] #6 require_once(\support\php\initialize.php) called at [\public\index.php:2]
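    The backtrace gives a strong hint: redirectLoggedIn() is reached from inside ObjectOrientedURLs::__construct() (frame #5), which means the statement in initialize.php that assigns the new object to $oourls has not finished executing yet, so the global is still NULL at that point. A hedged sketch of one way around it, assuming initialize.php is free to split construction and URL dispatch:

    <?php
    // initialize.php (sketch) -- assign the global first, dispatch afterwards,
    // so handlers that do "global $oourls;" see a populated variable.
    global $oourls;
    $oourls = new ObjectOrientedURLs();   // constructor no longer calls parseAndInvokeURL()
    $oourls->parseAndInvokeURL();         // URL handlers now run after the assignment above

    Alternatively, since verifyemail.php is require_once'd from inside a method of ObjectOrientedURLs, $this is available in that file, and $this->redirectLoggedIn() would avoid the global lookup altogether.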

    Read the article

  • JQM Nested Popups

    - by mcottingham
    I am having a hard time with the new Alpha release of JQM not showing nested popups. For example, I am displaying a popup form that the user is supposed to fill out, if the server side validation fails, I want to display an error popup. I am finding that the error popup is not being shown. I suspect that it is being shown below the original popup. function bindSongWriterInvite() { // Bind the click event of the invite button $("#invite-songwriter").click(function () { $("#songwriter-invite-popup").popup("open"); }); // Bind the invitation click event of the invite modal $("#songwriter-invite-invite").click(function () { if ($('#songwriter-invite').valid()) { $('#songwriter-invite-popup').popup('close'); $.ajax({ type: 'POST', async: true, url: '/songwriter/jsoncreate', data: $('#songwriter-invite').serialize() + "&songName=" + $("#Song_Title").val(), success: function (response) { if (response.state != "success") { alert("Should be showing error dialog"); mobileBindErrorDialog(response.message); } else { mobileBindErrorDialog("Success!"); } }, failure: function () { mobileBindErrorDialog("Failure!"); }, dataType: 'json' }); } }); // Bind the cancel click event of the invite modal $("#songwriter-invite-cancel").click(function () { $("#songwriter-invite-popup").popup("close"); }); } function mobileBindErrorDialog(errorMessage) { // Close all open popups. This is a work around as the JQM Alpha does not // open new popups in front of all other popups. var error = $("<div></div>").append("<p>" + errorMessage + "</p>") .popup(); $(error).popup("open"); } So, you can see that I attempt to show the error dialog regardless of whether the ajax post succeeds or fails. It just never shows up. Anyone have any thoughts? Mike
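    For what it's worth, two things commonly bite here with the alpha popup widget, and both are hedged guesses rather than a confirmed diagnosis: a popup div created on the fly has to be in the DOM (on the active page) before .popup() can enhance it, and jQuery Mobile only manages one popup at a time, so an open() issued while another popup is still closing gets dropped. A rough sketch of mobileBindErrorDialog with both addressed:

    function mobileBindErrorDialog(errorMessage) {
        // The element must live on the active page before .popup() can enhance it.
        var error = $("<div data-role='popup' class='ui-content'><p>" + errorMessage + "</p></div>")
            .appendTo($.mobile.activePage)
            .popup();

        // Give the previous popup time to finish closing; opening from the
        // "popupafterclose" event of "#songwriter-invite-popup" works as well.
        setTimeout(function () {
            error.popup("open");
        }, 100);
    }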

    Read the article

  • One controller with multiple models? Am I doing this correctly?

    - by user363243
    My web app, up until this point, has been fairly straight forward. I have Users, Contacts, Appointments and a few other things to manage. All of these are easy - it's just one model per section so I just did a scaffold for each, then modified the scaffolded code to fit my need. Pretty easy... Unfortunately I am having a problem on this next section because I want the 'Financials' section of my app to be more in depth than the other sections which I simply scaffolded. For example, when the user clicks the 'Contacts' link on the navigation bar, it just shows a list of contacts, pretty straight forward and is in line with the scaffold. However, when the user clicks the 'Financials' link on the navigation bar, I want to show the bank accounts on the left of the page and a few of the transactions on the right. So the financials tab will basically work with data from two models: transactions and bank_accounts. I think I should make the models (transactions & bank_accounts) and then make a controller called Financials, then I can query the models from the Financials controller and display the pages in app/views/financials/ Am I correct in this app layout? I have never worked with more than the basics of scaffolding so I want to ensure I get this right! Thank you!
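    That layout sounds right: controllers don't have to map one-to-one to models, so a Financials controller can load both models for its view. A minimal sketch (model names and the decision to show only recent transactions are assumptions, not from the question):

    # app/controllers/financials_controller.rb
    class FinancialsController < ApplicationController
      def index
        # One action, two models: the view at app/views/financials/index.html.erb
        # can render @bank_accounts on the left and @transactions on the right.
        @bank_accounts = BankAccount.all
        @transactions  = Transaction.all   # or scope/limit this to "a few" transactions
      end
    end

    A matching route (resources :financials, or map.resources :financials on older Rails) plus the app/views/financials/ directory you mentioned completes the picture.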

    Read the article

  • Another C++ question, delete not working?

    - by kyeana
    I'm new to C++ and am having a problem with delete and a destructor (I am sure I am making a stupid mistake here, but haven't been able to figure it out as of yet). When I step through into the destructor and attempt to call delete on a pointer, the message "Cannot access memory at address some address" shows up. The relevant code is: /* * Removes the front item of the linked list and returns the value stored * in that node. * * TODO - Throws an exception if the list is empty */ std::string LinkedList::RemoveFront() { LinkedListNode *n = pHead->GetNext(); // the node we are removing std::string rtnData = n->GetData(); // the data to return // un-hook the node from the linked list pHead->SetNext(n->GetNext()); n->GetNext()->SetPrev(pHead); // delete the node delete n; n=0; size--; return rtnData; } and /* * Destructor for a linked node. * * Deletes all the dynamically allocated memory, and sets those pointers to 0. */ LinkedListNode::~LinkedListNode() { delete pNext; // This is where the error pops up delete pPrev; pNext=0; pPrev=0; }
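    The crash is consistent with the destructor deleting nodes it doesn't own: delete pNext runs the neighbour's destructor, which deletes its own neighbours in turn, so by the time RemoveFront's delete n returns, pHead and other live nodes have already been destroyed and the next access touches freed memory. A hedged sketch of the usual split of ownership, assuming a non-circular list whose end pointers are null:

    // The node does not own its neighbours, so it must not delete them.
    LinkedListNode::~LinkedListNode()
    {
        pNext = 0;
        pPrev = 0;
    }

    // The list owns every node and deletes each one exactly once.
    LinkedList::~LinkedList()
    {
        LinkedListNode *current = pHead;
        while (current != 0)
        {
            LinkedListNode *next = current->GetNext();
            delete current;
            current = next;
        }
        pHead = 0;
    }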

    Read the article

  • jQuery validation plugin: valid() does not work with remote validation ?

    - by a-dilla
    I got started by following this awesome tutorial, but wanted to do the validation on on keyup and place my errors somewhere else. The remote validation shows its own error message at the appropriate times, making me think I had it working. But if I ask specifically if a field with remote validation is valid, it says no, actually, its not. In application.js I have this... $("#new_user").validate({ rules: { "user[login]": {required: true, minlength: 3, remote: "/live_validations/check_login"}, }, messages: { "user[login]": {required: " ", minlength: " ", remote: " "}, } }); $("#user_login").keyup(function(){ if($(this).valid()){ $(this).siblings(".feedback").html("0"); }else{ $(this).siblings(".feedback").html("1"); } }) And then this in the rails app... def check_login @user = User.find_by_login(params[:user][:login]) respond_to do |format| format.json { render :json => @user ? "false" : "true" } end end I think that my problem might have everything to do with this ticket over at jQuery, and tried to implement that code, but, being new to jQuery, it's all a bit over my head. When I say bit, I mean way way. Any ideas to fix it, or a new way to look at it, would be a big help.

    Read the article

  • Eclipse Java, writing a program [closed]

    - by ghassar
    I have an important exercise that I found on the internet, and I need help using Eclipse for Java. Thanks. I have to design, implement, test and document a Java program (a set of classes) for one of the following problem specifications: Problem 1 – Jubilee Estate Agency Property Management System A local Estate Agent would like a prototype system to keep track of properties that are offered for sale. The Estate Agent sells domestic and commercial properties. You will need to define classes that represent the Estate Agency System. You should design your system and the classes that you will need before starting coding. Your system must have a graphical user interface and be designed and developed using the object-oriented principles of the MVC architecture design pattern i.e. the user interface class must be separate from the other classes. The initial basic requirements for the system are as follows: • Include a list of domestic properties for sale that include details of: address, description, selling price, and number of rooms • Include a list of commercial properties for sale that include details of: address, description, selling price, and area in square metres • Enable the properties that are for sale to be viewed on the screen • Allow the customer to select one or more properties to be placed on a ‘viewing list’ so that the properties can be visited in person • Display on the screen the viewing list that shows the details of the properties chosen • Provide a basic search facility to find properties that are for sale in a particular price band and display the results • Enable a property to be marked as sold
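    As a starting point for the model half of the MVC split, the two property types share everything except one field, which suggests a small inheritance hierarchy. This is only one possible reading of the spec, with illustrative names:

    public abstract class Property {
        private final String address;
        private final String description;
        private final double sellingPrice;
        private boolean sold;

        protected Property(String address, String description, double sellingPrice) {
            this.address = address;
            this.description = description;
            this.sellingPrice = sellingPrice;
        }

        public void markAsSold()        { sold = true; }
        public boolean isSold()         { return sold; }
        public double getSellingPrice() { return sellingPrice; }
        public String getAddress()      { return address; }
        public String getDescription()  { return description; }
    }

    class DomesticProperty extends Property {
        private final int numberOfRooms;
        DomesticProperty(String address, String description, double price, int rooms) {
            super(address, description, price);
            this.numberOfRooms = rooms;
        }
        public int getNumberOfRooms() { return numberOfRooms; }
    }

    class CommercialProperty extends Property {
        private final double areaInSquareMetres;
        CommercialProperty(String address, String description, double price, double area) {
            super(address, description, price);
            this.areaInSquareMetres = area;
        }
        public double getAreaInSquareMetres() { return areaInSquareMetres; }
    }

    A separate EstateAgency class holding a List<Property> plus a viewing list, a Swing view, and a controller wiring them together would cover the remaining requirements; the price-band search is then just a filter over that list.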

    Read the article

  • Java Daemon Threading with JNI

    - by gwin003
    I have a Java applet that creates a new non-daemon thread like so: Thread childThread = new Thread(new MyRunnable(_this)); childThread.setDaemon(false); childThread.start(); Then my MyRunnable object calls a native method that is implemented in C++: @Override public void run() { while (true) { if (!ran) { System.out.println("isDaemon: " + Thread.currentThread().isDaemon()); _applet.invokePrintManager(_applet.fFormType, _applet.fFormName, _applet.fPrintImmediately, _applet.fDataSet); ran = true; } } } This C++ method calls into a C# DLL that shows a form. My problem is, whenever the user navigates away from the page with a Java applet on it, JVM (and my C# form) is killed. I need the form and JVM to remain open until it is closed by the user. I tried setting my thread to be a non-daemon thread, which is working because System.out.println("isDaemon: " + Thread.currentThread().isDaemon() prints isDaemon: false. Is there something related to the way that the C# form is created (is there another thread I'm not accounting for) or something I am overlooking?? My thread is not a daemon thread, but the JVM is being killed anyways.

    Read the article

  • CakePHP "down for maintenance" page

    - by user1852176
    I found this post about how to create a "down for maintenance" page but I'm having some trouble getting it to work properly. define('MAINTENANCE', 1); if(MAINTENANCE > 0){ require('maintenance.php'); die(); } When I place this code in /webroot.index.php it works. However, the answer suggests adding an IP address check so that if it's down, I would still be able to view it and make sure any updates went through smoothly. So, it would look something like this define('MAINTENANCE', 0); if(MAINTENANCE > 0 && $_SERVER['REMOTE_ADDR'] !='188.YOUR.IP.HERE'){ require('maintenance.php'); die(); } The issue is, my IP address will NOT be detected by cake. I've typed echo $_SERVER['REMOTE_ADDR'] and it just shows ::1. I also tried using my user_id but I got the following error Class 'AuthComponent' not found in.... I tried /index.php and /App/index.php but the maintenance page wasn't triggered and the page loads normally.
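    The ::1 value isn't Cake failing to detect anything: it is the IPv6 loopback address, which is what REMOTE_ADDR reports when you browse from the same machine the server runs on. A hedged tweak to the webroot/index.php snippet that whitelists both loopback forms alongside your public address:

    <?php
    define('MAINTENANCE', 1);

    // Addresses that may still browse the site while it is down.
    // '::1' and '127.0.0.1' are the local machine; replace the last entry with your real IP.
    $allowed = array('::1', '127.0.0.1', '188.YOUR.IP.HERE');

    if (MAINTENANCE > 0 && !in_array($_SERVER['REMOTE_ADDR'], $allowed)) {
        require 'maintenance.php';
        die();
    }

    AuthComponent isn't available in webroot/index.php because the framework hasn't been dispatched yet, which is why the user_id approach failed there.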

    Read the article

  • Image switch based on if a layer is visible

    - by Zuno
    I have a website that contains multiple pages as layers (not as separate HTML files). I have three images: <img src="image1.png" onclick="showlayer(1);return false;" /> <br /> <img src="image2.png" onclick="showlayer(2);return false;" /> <br /> <img src="image3.png" onclick="showlayer(3);return false;" /> When an image is clicked, it shows the relevant layer and hides the others. I want it to also change the image to image1_active.png / image2_active.png / image3_active.png depending on which layer is visible (not via the onclick event handler). Why not via the onclick event handler?... Layer 1 is set as visible by default in the CSS, so image1 needs to be image1_active.png by default too - since the user has not had to click on anything yet, this is why I need the image switch to detect the layer's visibility/display to change the image. The showlayer script is: function showlayer(n){ for(i=1;i<=3;i++){document.getElementById("layer"+i).style.display="none";document.getElementById("layer"+n).style.display="block"; }} Is it possible to adapt this script for this purpose? thank you
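    One way to keep the images in sync is to derive the image state inside showlayer itself and call it once on page load for the default layer, so no click is required. A sketch, assuming the three img tags get ids img1-img3 and follow the imageN.png / imageN_active.png naming above:

    function showlayer(n) {
        for (var i = 1; i <= 3; i++) {
            var visible = (i === n);
            document.getElementById("layer" + i).style.display = visible ? "block" : "none";
            // Swap each trigger image to its active/inactive variant.
            document.getElementById("img" + i).src = "image" + i + (visible ? "_active" : "") + ".png";
        }
    }

    // Layer 1 is visible by default in the CSS, so sync the images once on load.
    window.onload = function () { showlayer(1); };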

    Read the article

  • Cannot get list of elements using Linq to XML

    - by Blackator
    Sample XML: <CONFIGURATION> <Files> <File>D:\Test\TestFolder\TestFolder1\TestFile.txt</File> <File>D:\Test\TestFolder\TestFolder1\TestFile01.txt</File> <File>D:\Test\TestFolder\TestFolder1\TestFile02.txt</File> <File>D:\Test\TestFolder\TestFolder1\TestFile03.txt</File> <File>D:\Test\TestFolder\TestFolder1\TestFile04.txt</File> </Files> <SizeMB>3</SizeMB> <BackupLocation>D:\Log backups\File backups</BackupLocation> </CONFIGURATION> I've been doing some tutorials but I am unable to get all the list of file inside the files element. It only shows the first element and doesn't display the rest. This is my code: var fileFolders = from file in XDocument.Load(@"D:\Hello\backupconfig1.xml").Descendants("Files") select new { File = file.Element("File").Value }; foreach (var fileFolder in fileFolders) { Console.WriteLine("File = " + fileFolder.File); } How do I display all the File in the Files element, the SizeMB and BackupLocation? Thanks
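    file.Element("File") only ever returns the first matching child, which is why just one file shows up. Iterating the File elements directly, and reading the two scalar values from the root, covers everything; a hedged sketch against the sample XML:

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class Program
    {
        static void Main()
        {
            XDocument doc = XDocument.Load(@"D:\Hello\backupconfig1.xml");
            XElement config = doc.Element("CONFIGURATION");

            // One entry per <File> element instead of only the first.
            foreach (string file in config.Element("Files").Elements("File").Select(f => f.Value))
            {
                Console.WriteLine("File = " + file);
            }

            Console.WriteLine("SizeMB = " + (string)config.Element("SizeMB"));
            Console.WriteLine("BackupLocation = " + (string)config.Element("BackupLocation"));
        }
    }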

    Read the article

  • How to get jquery to append output immediately after each ajax call in a loop

    - by david_nash
    I'd like to append to an element and have it update immediately. console.log() shows the data as expected but append() does nothing until the for loop has finished and then writes it all at once. index.html: ... <body> <p>Page loaded.</p> <p>Data:</p> <div id="Data"></div> </body> test.js: $(document).ready(function() { for( var i=0; i<5; i++ ) { $.ajax({ async: false, url: 'server.php', success: function(r) { console.log(r); //this works $('#Data').append(r); //this happens all at once } }); } }); server.php: <?php sleep(1); echo time()."<br />"; ?> The page doesn't even render until after the for loop is complete. Shouldn't it at least render the HTML first before running the javascript?
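    With async: false each request blocks the browser's single thread, so nothing can repaint (or even finish the initial render) until the whole loop is done. Letting the requests run asynchronously, and chaining them if they must stay in order, lets each response appear as it arrives; a hedged sketch against the same server.php:

    $(document).ready(function () {
        function fetchNext(i) {
            if (i >= 5) { return; }
            $.ajax({
                url: 'server.php',
                success: function (r) {
                    $('#Data').append(r);   // the page repaints after every response now
                    fetchNext(i + 1);       // keeps the calls sequential without blocking
                }
            });
        }
        fetchNext(0);
    });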

    Read the article

  • setContentView taking long time (10-15 seconds) to execute

    - by Paul
    I have a large activity that contains 100 or more buttons, and it works fine once loaded. The problem, however, is loading: from clicking its launch icon to getting the first view takes 10-12 seconds. Until the first view appears, it shows a gray title bar on a black background. At the very least, I want to show a simple progress bar or dialog while it's loading, but it seems like you cannot show anything before setContentView has executed. I think I have tried everything I could without any success. If you can give me any hint or idea, I would be thankful. UPDATE: I found a dramatic resolution; it now takes a second to load the view. I didn't use a splash screen, thread or async task at all - BTW, don't try to touch the UI from a background thread, because the Android UI toolkit is not thread-safe. The problem was that those buttons were based on a custom class whose initialization loaded the same resource, so 100 or more file operations were happening during setContentView. Making that a single load solved my problem.
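    The custom button class isn't shown, so this is only a hedged illustration of the fix described in the update: decode each shared resource once and hand the cached result to every button, instead of reading the file once per button.

    // Hypothetical helper: decode each bitmap resource once and reuse it.
    public final class BitmapCache {
        private static final java.util.HashMap<Integer, android.graphics.Bitmap> CACHE =
                new java.util.HashMap<Integer, android.graphics.Bitmap>();

        public static android.graphics.Bitmap get(android.content.res.Resources res, int resId) {
            android.graphics.Bitmap bitmap = CACHE.get(resId);
            if (bitmap == null) {
                // The expensive file read happens only the first time per resource id.
                bitmap = android.graphics.BitmapFactory.decodeResource(res, resId);
                CACHE.put(resId, bitmap);
            }
            return bitmap;
        }
    }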

    Read the article

  • jQuery accessing objects

    - by user1275268
    I'm trying to access the values of an object from a function I created with a callback, but have run into some trouble. I'm still fairly new at jQuery/javascript. I call the function as follows: siteDeps(id,function(data){ $.each(data,function(key,val) { console.log(key); console.log(val); }); }); The function runs 5 ajax queries from XML data and returns data as an multidimensional object; here is a excerpt showing the meat of it: function siteDeps(id,callback) { var result = { sitecontactid : {}, siteaddressid : {}, sitephoneid : {}, contactaddressid : {}, contactphoneid : {} }; ...//.... var url5 = decodeURIComponent("sql2xml.php?query=xxxxxxxxxxx"); $.get(url5, function(data){ $(data).find('ID').each(function(i){ result.delsitephoneid[i] = $(this).text(); }); }); callback(result); } The console.log output shows this: sitecontactid Object 0: "2" 1: "3" __proto__: Object siteaddressid Object 0: "1" __proto__: Object sitephoneid Object 0: "1" 1: "5" 2: "54" __proto__: Object contactaddressid Object 0: "80" __proto__: Object contactphoneid Object 0: "6" __proto__: Object How can I extract the callback data in a format I can use, for instance sitephoneid: "1","5","54" Or is there a better/simpler way to do this? Thanks in advance.
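    The likely catch is timing: callback(result) runs as soon as siteDeps returns, before the five asynchronous $.get calls have filled the object (the console happens to show values because it inspects the object lazily, after the requests complete). Waiting for all of the requests before firing the callback gives you fully populated data; a hedged sketch of the shape, showing only the last query as in the excerpt:

    function siteDeps(id, callback) {
        var result = {
            sitecontactid: {}, siteaddressid: {}, sitephoneid: {},
            contactaddressid: {}, contactphoneid: {}
        };

        var url5 = decodeURIComponent("sql2xml.php?query=xxxxxxxxxxx");

        // $.get returns a promise; collect one per query and run the callback when all resolve.
        var pending = [
            // ...the promises for the other four queries go here...
            $.get(url5, function (data) {
                $(data).find('ID').each(function (i) {
                    result.sitephoneid[i] = $(this).text();
                });
            })
        ];

        $.when.apply($, pending).done(function () {
            callback(result);   // every sub-object is populated by this point
        });
    }

    If you want sitephoneid in the "1","5","54" form, build arrays instead of plain objects (result.sitephoneid.push($(this).text())) and read them back with join(',').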

    Read the article

  • Storing data from database [mysql_num_rows]

    - by user1717305
    So I have this code to pass items from database to my order table. When I'm echoing the session. The session variable contains something so there's no problem with that. But when I echo those variables under numrows, it only shows nothing. Is there something wrong? <?php error_reporting(E_ALL ^ E_NOTICE); session_start(); require("connect.php"); $UserID = $_SESSION['CustNum']; $UserN = $_SESSION['UserName']; $ProdGTotal = $_SESSION['ProdGTotal']; $queryord = mysql_query("SELECT * FROM customer WHERE UserName = '$UserN'"); $numrows = mysql_num_rows($queryord); if(numrows == 1){ $row = mysql_fetch_assoc($queryord)or die ('Unable to run query:'.mysql_error()); // fetch associated: get function from a query for a database $dbstreet = $row['Street']; $dhousenum = $row['HouseNum']; $dbcnum = $row['CelNum']; $dbarea = $row['Area']; $dbbuilding = $row['Building']; $dbcity = $row['City']; $dbpnum = $row['PhoneNum']; $dbfname = $row['FName']; $dblname = $row['LName']; } else die(mysql_error()); $query4=mysql_query("INSERT INTO orderdetails VALUES ('', '$UserID', Now(), '$dbhousenum', '$dbstreet', '$dbarea', '$dbbuilding', '$dbcity', '$dbfname', '$dblname', '$dbcnum', '$dbpnum', '$ProdGTotal')",$connect); if ($query4){ header("location:index.php"); } else die(mysql_error()); ?>
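    Two plain typos look like the culprits rather than anything session related: if(numrows == 1) is missing the $, so it tests the undefined constant numrows, and the row is read into $dhousenum while the INSERT uses $dbhousenum. A hedged correction of just those lines:

    <?php
    $numrows = mysql_num_rows($queryord);

    if ($numrows == 1) {   // was: if(numrows == 1)
        $row = mysql_fetch_assoc($queryord) or die('Unable to run query:' . mysql_error());
        $dbstreet   = $row['Street'];
        $dbhousenum = $row['HouseNum'];   // was $dhousenum, but the INSERT expects $dbhousenum
        // ...remaining fields exactly as in the original...
    }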

    Read the article

  • CSS Multiple Divs set to 100% Height

    - by Slevin
    I understand this question is redundant but I was unable to locate an answer from my searches on here and other online forums. Here is my situation. http://www.ci.fayetteville.nc.us/CityCommon/port/contact.html On that page I have a 'separator' line that is to extend to the bottom of the page. Now, I have thrown in plenty of break tags to stretch the page. This shows that the background image (used as a footer images in a way) stretches to the bottom of the page fine. (That image is contained within div#content. My question is how can I additionally get my div#rightContent to stretch just the same way? I have my html, body and container heights all specific at 100% as well as another container div called #content. I am pretty stumped. At the link you can view my source and hopefully point me in a good direction to achieve this. Any help would be greatly appreciated. Thanks.
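    Without the live page it's hard to be definitive, but the usual pattern is to carry the 100% height chain down to a positioning context and let the column stretch against it. A rough sketch using the ids mentioned above (the width and the #container id are assumptions):

    /* Keep the height chain intact from the root down. */
    html, body, #container, #content {
        height: 100%;
    }

    /* #content becomes the box the right column measures itself against. */
    #content {
        position: relative;
    }

    /* Stretch the separator column from the top to the bottom of #content. */
    #rightContent {
        position: absolute;
        top: 0;
        bottom: 0;
        right: 0;
        width: 250px;   /* illustrative */
    }

    If #content grows taller than the viewport (as it will with all those break tags), swapping its height: 100% for min-height: 100% lets the box, and therefore the absolutely positioned column, follow the full content height.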

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are laid out in the New Build Definition Wizard. So, let's see what the code looks like, following the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object: TfsTeamProjectCollection server = new TfsTeamProjectCollection(new Uri("http://<tfs>:<port>/tfs")); server.EnsureAuthenticated(); IBuildServer buildServer = (IBuildServer) server.GetService(typeof (IBuildServer)); General First we create an IBuildDefinition object for the team project and set a name and description for it: var buildDefinition = buildServer.CreateBuildDefinition(teamProject); buildDefinition.Name = "TestBuild"; buildDefinition.Description = "description here..."; Trigger Next up, we set the trigger type. For this one, we set it to Individual, which corresponds to the Continuous Integration - Build each check-in trigger option: buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual; Workspace For the workspace mappings, we create two mappings here, where one is a cloak. Note the use of the $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build. buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map); buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak); Build Defaults In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name: buildDefinition.BuildController = buildServer.GetBuildController(buildController); buildDefinition.DefaultDropLocation = @"\\SERVER\Drop\TestBuild"; Process So far, this was easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a Build Process Template. By default, every new team project contains two build process templates called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use the QueryProcessTemplates method to get a reference to the default template for the current team project: //Get default template var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First(); buildDefinition.Process = defaultTemplate; There are several build process parameters that can be set for the default build process template. Only one of these is required, the ProjectsToBuild parameter, which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of the IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value which can be any kind of object. This is rather messy, but fortunately there is a helper class called WorkflowHelpers in the Microsoft.TeamFoundation.Build.Workflow namespace that simplifies working with this persistence format a bit.
The following code shows how to set the BuildSettings information for a build definition: //Set process parameters var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters); //Set BuildSettings properties BuildSettings settings = new BuildSettings(); settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln"); settings.PlatformConfigurations = new PlatformConfigurationList(); settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug")); process.Add("BuildSettings", settings); buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process); The other build process parameters of a build definition can be set using the same approach. Retention Policy This one is easy, we just clear the default settings and set our own: buildDefinition.RetentionPolicyList.Clear(); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All); Save It! And we're done, let's save the build definition: buildDefinition.Save(); That's it!

    Read the article

  • Installing the Updated XP Mode which Requires no Hardware Virtualization

    - by Mysticgeek
    Good news for those of you who have a computer without Hardware Virtualization, Microsoft had dropped the requirement so you can now run XP Mode on your machine. Here we take a look at how to install it and getting working on your PC. Microsoft has dropped the requirement that your CPU supports Hardware Virtualization for XP Mode in Windows 7. Before this requirement was dropped, we showed you how to use SecureAble to find out if your machine would run XP Mode. If it couldn’t, you might have gotten lucky with turning Hardware Virtualization on in your BIOS, or getting an update that would enable it. If not, you were out of luck or would need a different machine. Note: Although you no longer need Hardware Virtualization, you still need Professional, Enterprise, or Ultimate version of Windows 7. Download Correct Version of XP Mode For this article we’re installing it on a Dell machine that doesn’t support Hardware Virtualization on Windows 7 Ultimate 64-bit version. The first thing you’ll want to do is go to the XP Mode website and select your edition of Windows 7 and language. Then there are three downloads you’ll need to get from the page. Windows XP Mode, Windows Virtual PC, and the Windows XP Mode Update (All Links Below). Windows genuine validation is required before you can download the XP Mode files. To make the validation process easier you might want to use IE when downloading these files and validating your version of Windows. Installing XP Mode After validation is successful the first thing to download and install is XP Mode, which is easy following the wizard and accepting the defaults. The second step is to install KB958559 which is Windows Virtual PC.   After it’s installed, a reboot is required. After you’ve come back from the restart, you’ll need to install KB977206 which is the Windows XP Mode Update.   After that’s installed, yet another restart of your system is required. After the update is configured and you return from the second reboot, you’ll find XP Mode in the Start menu under the Windows Virtual PC folder. When it launches accept the license agreement and click Next. Enter in your log in credentials… Choose if you want Automatic Updates or not… Then you’re given a message saying setup will share the hardware on your computer, then click Start Setup. While setup completes, you’re shown a display of what XP Mode does and how to use it. XP Mode launches and you can now begin using it to run older applications that are not compatible with Windows 7. Conclusion This is a welcome news for many who want the ability to use XP Mode but didn’t have the proper hardware to do it. The bad news is users of Home versions of Windows still don’t get to enjoy the XP Mode feature officially. However, we have an article that shows a great workaround – Create an XP Mode for Windows 7 Home Versions & Vista. 
Download XP Mode, Windows Virtual PC, and Windows XP Mode Update

    Read the article

  • BizTalk and IBM WebSphere MQ Errors

    - by Christopher House
    The project I'm currently working on is going to make heavy use of IBM WebShere MQ to send messages from BizTalk to the client's iSeries box.  I'd never previously worked with WebSphere MQ, so I didn't really have any idea what it would take to get this to work.  I was pleasantly surprised that it wasn't too difficult to configure a send port and pass messages through it to a queue.  Or so I thought... A couple of weeks ago, the client gave me the name of a host, queue manager and queue that I'd been using for my development.  Everything was going great, I was able to put messages onto the queue, I was happy, the client was happy.  Life was good.  Then the client tells me that the host I've been connecting to is actually a Solaris box and that in prod, we'll actually be sending to an iSeries.  We both agree that it would behoove us to start pointing my dev environment to their dev iSeries box in order to flush out any weirdness there might be.  As it turns out, it was a good thing we made the change.  As soon as I reconfigured my BRE policy that sets endpoint information to point to the iSeries queue, we started seeing failures in the event log.  An example from the event log: Event Type: Error Event Source: BizTalk Server 2009 Event Category: BizTalk Server 2009 Event ID: 5754 Date:  6/9/2010 Time:  10:16:41 AM User:  N/A Computer: WINDOWS2003 Description: A message sent to adapter "MQSC" on send port "<my dynamic sendport name>" with URI "mqsc://client/tcp/<hostname>(1414)/<queue manager name>/<queue name>" is suspended.  Error details: Failure encountered while attempting to open queue. queue = <queue name> queueManager = <queue manager name>, reasonCode = 6124  MessageId:  {76825C7C-611A-4A56-8A6F-35E1124BDB5C}  InstanceID: {BA389103-DF9B-493F-8C61-44574822AAD6} The key piece of information in the event entry is the reasonCode, 6124.  A quick Google search shows that reasonCode 6124 is the code for MQRC_NOT_CONNECTED.  According to IBM's docs, this means that you've tried to send a message without first opening a connection to the queue manager.  Obviously, in the context of BizTalk, this is an unexpected error, since this sort of thing should be managed entirely by the send adapter. Perusing IBM's documentation a bit more, I came across some info on how to turn on tracing for MQ.  With tracing enabled, I tried sending a message again, then went and reviewed the trace files.  The bulk of the information in the trace files didn't mean a thing to me, but at the end of one of the files, I did notice this: 00006257 15:40:20.327795   3500.4      RSESS:000009 ------{  reqReleaseConn 00006258 15:40:20.328714   3500.4      RSESS:000009 ------}  reqReleaseConn (rc=OK) 00006259 15:40:20.328727   3500.4      RSESS:000009 ------{  xcsClearTraceIdent 0000625A 15:40:20.328739   3500.4           :       ------}  xcsClearTraceIdent (rc=OK) 0000625B 15:40:20.328752   3500.4           :       -----}! trmzstMQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625C 15:40:20.328765   3500.4           :       ----}! MQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625D 15:40:20.328766   3500.4           :       ---}! ImqQueueManager::connect (rc=MQRC_NOT_AUTHORIZED) 0000625E 15:40:20.328767   3500.4           :       --}! ImqObject::open (rc=MQRC_NOT_CONNECTED) 0000625F 15:40:20.328768   3500.4           :       --{  ImqQueue::lock 00006260 15:40:20.328769   3500.4           :       --}! 
ImqQueue::lock (rc=Unknown(1)) 00006261 15:40:20.328769   3500.4           :       --{  ImqQueue::unlock 00006262 15:40:20.328769   3500.4           :       --}! ImqQueue::unlock (rc=Unknown(1)) It seemed like the MQRC_NOT_CONNECTED error was being caused by a security related issue (MQRC_NOT_AUTHORIZED).  I did notice something earlier in the log where it appeared that MQ was passing a field named UID with a value equal to the account name that my BizTalk service was running under.  I ended up creating a new local account on the BizTalk server that had the same name as a user which had access to the queue manager on the iSeries.  I then created a new host instance that ran under this new account, created a send handler for the MQSC adapter on this new host instance and reconfigured my orchestration to run on the new host instance.  After bouncing all my host instances, I was now able to send messages to the iSeries. It's still not clear to me why we were able to connect to the Solaris server.  I ended up contacting IBM's support and they did confirm that the process sending to MQ does in fact pass the identity to the queue manager it's connecting to.

    Read the article

  • ActionResult types in MVC2

    - by rajbk
    In ASP.NET MVC, incoming browser requests gets mapped to a controller action method. The action method returns a type of ActionResult in response to the browser request. A basic example is shown below: public class HomeController : Controller { public ActionResult Index() { return View(); } } Here we have an action method called Index that returns an ActionResult. Inside the method we call the View() method on the base Controller. The View() method, as you will see shortly, is a method that returns a ViewResult. The ActionResult class is the base class for different controller results. The following diagram shows the types derived from the ActionResult type. ASP.NET has a description of these methods ContentResult – Represents a text result. EmptyResult – Represents no result. FileContentResult – Represents a downloadable file (with the binary content). FilePathResult – Represents a downloadable file (with a path). FileStreamResult – Represents a downloadable file (with a file stream). JavaScriptResult – Represents a JavaScript script. JsonResult – Represents a JavaScript Object Notation result that can be used in an AJAX application. PartialViewResult – Represents HTML and markup rendered by a partial view. RedirectResult – Represents a redirection to a new URL. RedirectToRouteResult – Represents a result that performs a redirection by using the specified route values dictionary. ViewResult – Represents HTML and markup rendered by a view. To return the types shown above, you call methods that are available in the Controller base class. A list of these methods are shown below.   Methods without an ActionResult return type The MVC framework will translate action methods that do not return an ActionResult into one. Consider the HomeController below which has methods that do not return any ActionResult types. The methods defined return an int, object and void respectfully. public class HomeController : Controller { public int Add(int x, int y) { return x + y; }   public Employee GetEmployee() { return new Employee(); }   public void DoNothing() { } } When a request comes in, the Controller class hands internally uses a ControllerActionInvoker class which inspects the action parameters and invokes the correct action method. The CreateActionResult method in the ControllerActionInvoker class is used to return an ActionResult. This method is shown below. If the result of the action method is null, an EmptyResult instance is returned. If the result is not of type ActionResult, the result is converted to a string and returned as a ContentResult. protected virtual ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue) { if (actionReturnValue == null) { return new EmptyResult(); }   ActionResult actionResult = (actionReturnValue as ActionResult) ?? new ContentResult { Content = Convert.ToString(actionReturnValue, CultureInfo.InvariantCulture) }; return actionResult; }   In the HomeController class above, the DoNothing method will return an instance of the EmptyResult() Renders an empty webpage the GetEmployee() method will return a ContentResult which contains a string that represents the current object Renders the text “MyNameSpace.Controllers.Employee” without quotes. the Add method for a request of /home/add?x=3&y=5 returns a ContentResult Renders the text “8” without quotes. Unit Testing The nice thing about the ActionResult types is in unit testing the controller. 
We can, without starting a web server, create an instance of the Controller, call the methods and verify that the type returned is the expected ActionResult type. We can then inspect the returned type properties and confirm that it contains the expected values. Enjoy! Sulley: Hey, Mike, this might sound crazy but I don't think that kid's dangerous. Mike: Really? Well, in that case, let's keep it. I always wanted a pet that could kill me.

    Read the article

  • Initializing and drawing a mesh using OpenTK

    - by Boreal
    I'm implementing a "Mesh" class to use in my OpenTK game. You pass in a vertex array and an index array, and then you can call Mesh.Draw() to draw it using a shader. I've heard VBO's and VAO's are the way to go for this approach, but nowhere have I found a guide that shows how to get Data Video Memory Shader. Can someone give me a quick rundown of how this works? EDIT: So far, I have this: struct Vertex { public Vector3 position; public Vector3 normal; public Vector3 color; public static int memSize = 9 * sizeof(float); public static byte[] memOffset = { 0, 3 * sizeof(float), 6 * sizeof(float) }; } class Mesh { private uint vbo; private uint ibo; // stores the numbers of vertices and indices private int numVertices; private int numIndices; public Mesh(int numVertices, Vertex[] vertices, int numIndices, ushort[] indices) { // set numbers this.numVertices = numVertices; this.numIndices = numIndices; // generate buffers GL.GenBuffers(1, out vbo); GL.GenBuffers(1, out ibo); GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // send data to the buffers GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(Vertex.memSize * numVertices), vertices, BufferUsageHint.StaticDraw); GL.BufferData(BufferTarget.ElementArrayBuffer, new IntPtr(sizeof(ushort) * numIndices), indices, BufferUsageHint.StaticDraw); } public void Render() { // bind buffers GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // define offsets GL.VertexPointer(3, VertexPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[0])); GL.NormalPointer(NormalPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[1])); GL.ColorPointer(3, ColorPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[2])); // draw GL.DrawElements(BeginMode.Triangles, numIndices, DrawElementsType.UnsignedInt, (IntPtr)0); } } class Application : GameWindow { Mesh triangle; protected override void OnLoad(EventArgs e) { base.OnLoad(e); GL.ClearColor(0.1f, 0.2f, 0.5f, 0.0f); GL.Enable(EnableCap.DepthTest); GL.Enable(EnableCap.VertexArray); GL.Enable(EnableCap.NormalArray); GL.Enable(EnableCap.ColorArray); Vertex v0 = new Vertex(); v0.position = new Vector3(-1.0f, -1.0f, 4.0f); v0.normal = new Vector3(0.0f, 0.0f, -1.0f); v0.color = new Vector3(1.0f, 1.0f, 0.0f); Vertex v1 = new Vertex(); v1.position = new Vector3(1.0f, -1.0f, 4.0f); v1.normal = new Vector3(0.0f, 0.0f, -1.0f); v1.color = new Vector3(1.0f, 0.0f, 0.0f); Vertex v2 = new Vertex(); v2.position = new Vector3(0.0f, 1.0f, 4.0f); v2.normal = new Vector3(0.0f, 0.0f, -1.0f); v2.color = new Vector3(0.2f, 0.9f, 1.0f); Vertex[] va = { v0, v1, v2 }; ushort[] ia = { 0, 1, 2 }; triangle = new Mesh(3, va, 3, ia); } protected override void OnRenderFrame(FrameEventArgs e) { base.OnRenderFrame(e); GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); Matrix4 modelview = Matrix4.LookAt(Vector3.Zero, Vector3.UnitZ, Vector3.UnitY); GL.MatrixMode(MatrixMode.Modelview); GL.LoadMatrix(ref modelview); triangle.Render(); SwapBuffers(); } } It doesn't draw anything.
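    A few things in this listing are worth double-checking; these are hedged guesses, not a confirmed diagnosis. The indices are ushort[] but the draw call says DrawElementsType.UnsignedInt; client-side arrays are normally enabled with GL.EnableClientState rather than GL.Enable; and with the default identity projection, vertices at z = 4 fall outside the clip volume. A sketch of the adjustments:

    // 1) Match the element type to the ushort[] index buffer.
    GL.DrawElements(BeginMode.Triangles, numIndices, DrawElementsType.UnsignedShort, IntPtr.Zero);

    // 2) In OnLoad, enable the client-side arrays (enum names vary slightly by OpenTK version).
    GL.EnableClientState(ArrayCap.VertexArray);
    GL.EnableClientState(ArrayCap.NormalArray);
    GL.EnableClientState(ArrayCap.ColorArray);

    // 3) Give the fixed-function pipeline a projection, e.g. in OnResize.
    GL.Viewport(0, 0, Width, Height);
    Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
        (float)Math.PI / 4.0f, Width / (float)Height, 0.1f, 100.0f);
    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadMatrix(ref projection);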

    Read the article

  • Developer’s Life – Every Developer is a Superman

    - by Pinal Dave
    I enjoyed comparing developers to Spiderman so much, that I have decided to continue the trend and encourage some of my favorite people (developers) with another favorite superhero – Superman.  Superman is probably the most famous superhero – and one of the most inspiring. Everyone has their own favorite, but Superman has been the longest enduring of all comic book characters.  Clark Kent has inspired multiple movie series, TV shows, books, cartoons, and costumes.  Superman’s enduring popularity has been attributed to his superhuman strength, integrity, dedication to good, and his humility in keeping his identity a secret. So how are developers like Superman? Well, read on my list of reasons. Secret Identities They have secret identities.  I’m not saying that all developers wear thick glasses and go by an alias like “Clark Kent.”  But developers certainly work in the background, making sure everything runs smoothly, often without recognition.  Like Superman, when they have done their job right, no one knows they were there. Working Alone You don’t have to work alone.  Superman doesn’t have a sidekick like Robin or Bat Girl, but he is a major player in the Justice League.  Developers have amazing skills, and they shouldn’t be afraid to unite those skills to solve some of the world’s major problems (like slow networks). Daily Inspiration Developers are inspiring.  Clark Kent works at The Daily Planet, Metropolis’ newspaper, which is lucky because he can keep some of the publicity Superman inspires under wraps.  Developers might go unnoticed sometimes, but when people hear about some of the tasks they accomplish on a daily basis, it inspires awe. Discover Your Superpowers You have to discover your superpowers.  Clark Kent didn’t just wake up one morning with the full understanding that he could fly, leap tall buildings in a single bound, and was stronger than a speeding locomotive.  He slowly discovered these powers (after a few comic book-worthy misunderstandings!).  Developers are always learning and growing as well.  You probably won’t wake up with super powers, either, but years of practice and continuing education can get you close. Every Day is a New Day The story continues.  The Superman comic books are still being printed, and have been in print since 1938.  There have been two TV series, (one, Smallville, was on TV for ten seasons) and multiple cartoon adaptations.  There have been multiple movies, with many different actors.  A new reboot came out last year, and another is set to premier in 2016.   So, developers, when you are having a bad day or a problem seems unsolvable – remember, the story will continue!  There is always tomorrow. I hope you are all enjoying reading about developers-as-superheroes as much as I am enjoying writing about them.  Please tell me how else developers are like Superheroes in the comments – especially if you know any developers who are faster than a speeding bullet and can leap tall buildings in a single bound. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Developer, Superhero

    Read the article

  • Can't install git on Ubuntu 12.10

    - by Lucas Windir
    I'm following these instructions to install git on my laptop: http://git-scm.com/download/linux When I do: $ sudo apt-get install git-core This is what my terinal shows: Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libasprintf0c2:i386 libcroco3:i386 libgettextpo0:i386 libgomp1:i386 libunistring0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: git git-man liberror-perl Suggested packages: git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb The following NEW packages will be installed: git git-core git-man liberror-perl 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. Need to get 6,825 kB of archives. After this operation, 15.3 MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! liberror-perl git-man git git-core Install these packages without verification [y/N]? E: Some packages could not be authenticated lucas@lucas-Inspiron-N5050:~$ sudo apt-get install git-core Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libasprintf0c2:i386 libcroco3:i386 libgettextpo0:i386 libgomp1:i386 libunistring0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: git git-man liberror-perl Suggested packages: git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb The following NEW packages will be installed: git git-core git-man liberror-perl 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. Need to get 6,825 kB of archives. After this operation, 15.3 MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! liberror-perl git-man git git-core Install these packages without verification [y/N]? 
y Err httpq://py.archive.ubuntu.com/ubuntu/ quantal/main liberror-perl all 0.17-1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err httpq://py.archive.ubuntu.com/ubuntu/ quantal/main git-man all 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err httpq://py.archive.ubuntu.com/ubuntu/ quantal/main git amd64 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err httpq://py.archive.ubuntu.com/ubuntu/ quantal/main git-core all 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch httpq://py.archive.ubuntu.com/ubuntu/pool/main/libe/liberror-perl/liberrorperl_0.17-1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch httpq://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-man_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch httpq://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git_1.7.10.4-1ubuntu1_amd64.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-core_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? How could I install git on Ubuntu 12.10? I can't even do it from the Ubuntu Software Center. Thanks in advance!
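    Both the authentication warning and the "Something wicked happened resolving 'py.archive.ubuntu.com'" errors point at the package sources rather than at git itself: the mirror listed in /etc/apt/sources.list is not resolving, so the package lists are stale and the downloads fail. A hedged sequence to try (the sed line assumes you are happy to switch to the main archive):

    # Point the sources at the main archive instead of the unreachable mirror,
    # refresh the package lists, then retry the install.
    sudo sed -i 's/py\.archive\.ubuntu\.com/archive.ubuntu.com/g' /etc/apt/sources.list
    sudo apt-get update
    sudo apt-get install git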

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the "Last Good Build" and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well, but there is a dimension missing here regarding releases. Often you run several release builds before you actually deploy the result of a build to a test or production system. When you do this, wouldn't it be nice to be able to send the customer a release note that contains all work items and changesets since the previously deployed version? At our company, we have developed a Release Repository, which is basically a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip files, SQL scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build etc. Also, it contains the associated changesets and work items. When deploying the MSIs for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets. Here is a sample screenshot of how this looks for a sample build/application. The web site is available both for us and for the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version. There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedWorkItems. You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process. Now, you need to assign the value from AssociatedChangesets to this variable. This is done using the Assign workflow activity. Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated.
To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable. First you add an InArgument to your activity where you can pass in the variable that we defined. [RequiredArgument] public InArgument<IList<Changeset>> AssociatedChangesets { get; set; } Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated in that changeset: foreach (Changeset ch in associatedChangesets) { // Add change theChangesets.Add( new AssociatedChangeset(ch.ChangesetId, ch.ArtifactUri, ch.Committer, ch.Comment, ch.ChangesetId)); foreach (var wi in ch.WorkItems) { theWorkItems.Add( new AssociatedWorkItem(wi["System.AssignedTo"].ToString(), wi.Id, wi["System.State"].ToString(), wi.Title, wi.Type.Name, wi.Id, wi.Uri)); } } NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information that is eventually pushed to the release repository.

    Read the article

  • PowerShell Control over Nikon D3000 Camera

    My wife got me a Nikon D3000 camera for Christmas last year, and I'm loving it but still trying to wrap my head around some of its features. For instance, when you plug it into a computer via USB, it doesn't show up as a drive like most cameras I'm used to, but rather it shows up as Computer\D3000. After a bit of research, I've learned that this is because it implements the MTP/PTP protocol, and thus doesn't actually let Windows mount the camera's storage as a drive letter. Nikon describes the use of the MTP and PTP protocols in their cameras here. What I'm really trying to do is gain access to the camera's file system via PowerShell. I've been using a very handy PowerShell script to pull pictures off of my cameras and organize them into folders by date. I'd love to be able to do the same thing with my Nikon D3000, but so far I haven't been able to figure out how to get access to the files in PowerShell. If you know, I'd appreciate any links/tips you can provide. All I could find is a shareware product called PTPdrive, which I'm not prepared to shell out money for (yet). (And yes, you can do much the same thing with Windows 7's Import Pictures and Videos wizard, which is pretty good too.) However, in my searching, I did find some really cool stuff you can do with PowerShell and one of these cameras, like actually taking pictures via PowerShell commands. Credit for this goes to James O'Neill and Mark Wilson. Here's what I was able to do: Taking Pictures via PowerShell with D3000 First, connect your camera, turn it on, and launch PowerShell. Execute the following commands to see what commands your device supports: $dialog = New-Object -ComObject "WIA.CommonDialog" $device = $dialog.ShowSelectDevice() $device.Commands You should see something like this: Now, to take a picture, simply point your camera at something and then execute this command: $device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}") Imagine my surprise when this actually took a picture (with auto-focus). Imagine what you could do with a camera completely under the control of your computer. Time-lapse photography would be pretty simple, for instance, with a very simple loop that takes a picture and then sleeps for a minute (or whatever time period). Hooked up to a laptop for portability (and an A/C power supply), this would be pretty trivial to implement. I may have to give it a shot and report back.
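    A hedged sketch of that time-lapse loop, reusing only the WIA objects and the take-picture command GUID shown above (the frame count and interval are arbitrary):

    $dialog = New-Object -ComObject "WIA.CommonDialog"
    $device = $dialog.ShowSelectDevice()
    $takePicture = "{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}"

    for ($frame = 0; $frame -lt 60; $frame++) {
        # One shot per minute for an hour.
        $device.ExecuteCommand($takePicture) | Out-Null
        Start-Sleep -Seconds 60
    }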

    Read the article

  • Raspberry Pi entrance sign backed by Umbraco - Part 1

    - by Chris Houston
    Being experts on all things Umbraco, we jumped at the chance to help our client, QV Offices, with their pressing signage predicament. They needed to display a sign in the entrance to their building and approached us for our advice. Of course it had to be electronic: displaying multiple names of their serviced office clients, meeting room bookings and on-the-pulse promotions. But with a winding Victorian staircase and minimal storage space how could the monitor be run, updated and managed? That’s where we came in…Raspberry PiUmbraco CMSAutomatic updatesAutomated monitor of the signPower saving when the screen is not in useMounting the screenThe screen that has been used is a standard LED low energy Full HD screen and has been mounted on the wall using it's VESA mounting points, as the wall is a stud wall we were able to add an access panel behind the screen to feed through the mains, HDMI and sensor cables.The Raspberry Pi is then tucked away out of sight in the main electrical cupboard which just happens to be next to the sign, we had an electrician add a power point inside this cupboard to allow us to power the screen and the Raspberry Pi.Designing the interface and editing the contentAlthough a room sign was the initial requirement from QV Offices, their medium term goal has always been to add online meeting booking to their website and hence we suggested adding information about the current and next day's meetings to the sign that would be pulled directly from their online booking system.We produced the design and built the web page to fit exactly on a 1920 x 1080 screen (Full HD in Portrait)As you would expect all the information can be edited via an Umbraco CMS, they are able to add floors, rooms, clients and virtual clients as well as add meeting bookings to their meeting diary.How we configured the Raspberry PiAfter receiving a new Raspberry Pi we downloaded the latest release of Raspbian operating system and followed the official guide which shows how to copy the OS onto an SD card from a Mac, we then followed the majority of steps on this useful guide: 10 Things to Do After Buying a Raspberry Pi.Installing ChromiumWe chose to use the Chromium web browser which for those who do not know is the open sourced version of Google Chrome. 
You can install this from the terminal with the following command:sudo apt-get install chromium-browserInstalling UnclutterWe found this little application which automatically hides the mouse pointer, it is used in the script below and is installed using the following command:sudo apt-get install unclutterAuto start Chromium and disabling the screen saver, power saving and mouseWhen the Raspberry Pi has been installed it will not have a keyboard or mouse and hence if their was a power cut we needed it to always boot and re-loaded Chromium with the correct URL.Our preferred command line text editor is Nano and I have assumed you know how to use this editor or will be able to work it out pretty quickly.So using the following command:sudo nano /etc/xdg/lxsession/LXDE/autostartWe then changed the autostart file content to:@lxpanel --profile LXDE@pcmanfm --desktop --profile LXDE@xscreensaver -no-splash@xset s off@xset -dpms@xset s noblank@chromium --kiosk --incognito http://www.qvoffices.com/someURL@unclutter -idle 0The first few commands turn off the screen saver and power saving, we then open Cromium in Kiosk Mode (full screen with no menu etc) and pass in the URL to use (I have changed the URL in this example) We found a useful blog post with the Cromium command line switches.Finally we also open an application called Unclutter which auto hides the mouse after 0 seconds, so you will never see a mouse on the sign.We also had to edit the following file:sudo nano /etc/lightdm/lightdm.confAnd added the following line under the [SeatDefault] section:xserver-command=X -s 0 dpmsRefreshing the screenWe decided to try and add a scheduled task that would trigger Chromium to reload the page, at some point in the future we might well change this to using Javascript to update the content, but for now this works fine.First we installed the XDOTool which enables you to script Keyboard commands:sudo apt-get install xdotoolWe used the Refreshing Chromium Browser by Shell Script post as a reference and created the following shell script (which we called refreshing.sh):export DISPLAY=":0"WID=$(xdotool search --onlyvisible --class chromium|head -1)xdotool windowactivate ${WID}xdotool key ctrl+F5This selects the correct display and then sends a CTRL + F5 to refresh Chromium.You will need to give this file execute permissions:chmod a=rwx refreshing.shNow we have the script file setup we just need to schedule it to call this script periodically which is done by using Crontab, to edit this you use the following command:crontab -eAnd we added the following:*/5 * * * * DISPLAY=":.0" /home/pi/scripts/refreshing.sh >/home/pi/cronlog.log 2>&1This calls our script every 5 minutes to refresh the display and it logs any errors to the cronlog.log file.SummaryQV Offices now have a richer and more manageable booking system than they did before we started, and a great new sign to boot.How could we make sure that the sign was running smoothly downstairs in a busy office centre? A second post will follow outlining exactly how Vizioz enabled QV Offices to monitor their sign simply and remotely, from the comfort of their desks.

    Read the article
