Search Results

Search found 5586 results on 224 pages for 'global illumination'.


  • Inline template efficiency

    - by Darryl Gove
    I like inline templates, and use them quite extensively. Whenever I write code with them I'm always careful to check the disassembly to see that the resulting output is efficient. Here's a potential cause of inefficiency. Suppose we want to use the mis-named Leading Zero Detect (LZD) instruction on T4 (this instruction does a count of the number of leading zero bits in an integer register - so it should really be called leading zero count). So we put together an inline template called lzd.il looking like:
        .inline lzd
        lzd %o0,%o0
        .end
    And we throw together some code that uses it:
        int lzd(int);
        int a;
        int c=0;
        int main()
        {
          for(a=0; a<1000; a++)
          {
            c=lzd(c);
          }
          return 0;
        }
    We compile the code with some amount of optimisation, and look at the resulting code:
        $ cc -O -xtarget=T4 -S lzd.c lzd.il
        $ more lzd.s
        .L77000018:
        /* 0x001c  11 */  lzd    %o0,%o0
        /* 0x0020   9 */  ld     [%i1],%i3
        /* 0x0024  11 */  st     %o0,[%i2]
        /* 0x0028   9 */  add    %i3,1,%i0
        /* 0x002c     */  cmp    %i0,999
        /* 0x0030     */  ble,pt %icc,.L77000018
        /* 0x0034     */  st     %i0,[%i1]
    What is surprising is that we're seeing a number of loads and stores in the code. Everything could be held in registers, so why is this happening? The problem is that the code is only inlined at the code generation stage - when the actual instructions are generated. Earlier compiler phases see a function call. The called function could do all kinds of nastiness to global variables (like 'a' in this code), so we need to store them to memory before the function call and load them from memory again after it. Fortunately we can use a #pragma directive to tell the compiler that the routine lzd() has no side effects - meaning that it does not read or write to memory. The directive to do that is #pragma no_side_effect(<routine name>), and it needs to be placed after the declaration of the function. The new code looks like:
        int lzd(int);
        #pragma no_side_effect(lzd)
        int a;
        int c=0;
        int main()
        {
          for(a=0; a<1000; a++)
          {
            c=lzd(c);
          }
          return 0;
        }
    Now the loop looks much neater:
        /* 0x0014  10 */  add    %i1,1,%i1
        !  11      !      {
        !  12      !        c=lzd(c);
        /* 0x0018  12 */  lzd    %o0,%o0
        /* 0x001c  10 */  cmp    %i1,999
        /* 0x0020     */  ble,pt %icc,.L77000018
        /* 0x0024     */  nop

    Read the article

  • Was I wrong about JavaScript?

    - by jboyer
    Yes, I was. Recently, I’ve taken a good hard look at JavaScript. I’ve used it before but mostly in the capacity of web design. Using JQuery to make your web page do cool stuff is different than really creating a JavaScript application using all of the language constructs. What I’m finding as I use it more is that I may have been wrong about my assumptions about it. Let me explain.   I enjoyed doing cool stuff with JQuery but the limited experience with JavaScript as a language coupled with the bad things that I heard about it led me to not have any real interest in it. However, JavaScript is ubiquitous on the web and if I want to do any web development, which I do, I need to learn it. So here I am, diving deep into the language with the help of the JavaScript Fundamentals training course at Pluralsight (great training for a low price) and the JavaScript: The Good Parts book by Douglas Crockford.   Now, there are certainly parts of JavaScript that are bad. I think these are well known by any developer that uses it. The parts that I feel are especially egregious are the following: The global object null vs. undefined truthy and falsy limited (nearly nonexistent) scoping ‘==’ and ‘===’ (I just don’t get the reason for coercion)   However, what I am finding hiding under the covers of the bad things is a good language. I am finding that I am legitimately enjoying JavaScript. This I was not expecting. I’m not going to go into a huge dissertation on what I like about it, but some things include: Object literal notation dynamic typing functional style (JavaScript: The Good Parts describes it as LISP in C clothing) JSON (better than XML) There are parts of JavaScript that seem strange to OOP developers like myself. However, just because it is different or seems strange does not mean it is bad. Some differences are quite interesting and useful.   I feel that it is important for developers to challenge their assumptions and also to be able to admit when they are wrong on a topic. Many different situations can arise that lead to this, such as choosing the wrong technology for a problem’s solution, misunderstanding the requirements, etc. I decided to challenge my assumptions about JavaScript instead of moving straight into CoffeeScript or Dart. After exploring it, I find that I am beginning to enjoy it the more I use it. As long as there are those like Crockford to help guide me in the right way to code in JavaScript, I can create elegant and efficient solutions to problems and add another ‘arrow’ to the ‘quiver’, so to speak. I do still intend to learn CoffeeScript to see what the hub-bub is about, but now I no longer have to be afraid of JavaScript as a legitimate programming language.   Has something similar ever happened to you? Tell me about it in the comments below.
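
    A quick snippet (not from the post - just an illustration of the "bad parts" listed above) showing why '==' versus '===', null versus undefined, and truthy/falsy trip people up:

        // '==' coerces its operands before comparing; '===' never does.
        0 == "";            // true  - "" coerces to 0
        0 == "0";           // true
        "" == "0";          // false - coercion is not even transitive
        0 === "";           // false - different types, no coercion

        // null and undefined are distinct values that '==' treats as equal.
        null == undefined;  // true
        null === undefined; // false

        // Falsy values: false, 0, "", null, undefined, NaN; everything else is truthy.
        if ("0") { console.log("'0' is a non-empty string, so it is truthy"); }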

    Read the article

  • Microsoft Azure News: Capturing VM Images

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2014/05/21/microsoft-azure-news-capturing-vm-images.aspxIf you have a Virtual Machine (VM) in Microsoft Azure that has a specific configuration, it used to be difficult to clone that VM. You had to sysprep the VM, and clone the data disks. This was slow, prone to errors, and stopped you from being productive. No more! A new option, called Capture, allows you to easily select a VM, running or not. The capture will copy the OS disk and data disks and create a new image out of it automatically for you. This means you can now easily clone an entire VM without affecting productivity.  To capture a VM, simply browse to your Virtual Machines in the Microsoft Azure management website, select the VM you want to clone, and click on the Capture button at the bottom. A window will come up asking to name your image. It took less than 1 minute for me to build a clone of my server. And because it is stored as an image, I can easily create a new VM with it. So that’s what I did… And that took about 5 minutes total.  That’s amazing…  To create a new VM from your image, click on the NEW icon (bottom left), select Compute/Virtual Machine/From Gallery, and select My Images from the left menu when selecting an Image. You will find your newly created image. Because this is a clone, you will not be prompted for a new login; the user id/password is the same. About Herve Roggero Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.
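
    For readers who prefer to script the same capture, a rough sketch using the Azure PowerShell (Service Management) cmdlets of that era follows; the service, VM, and image names are placeholders and the exact parameters may differ by module version, so treat it as an illustration rather than the author's procedure.

        # Hypothetical names - substitute your own cloud service, VM and image name.
        # '-OSState Specialized' captures the VM as-is (no sysprep), which matches the
        # portal Capture behaviour described above; use 'Generalized' after sysprep.
        Save-AzureVMImage -ServiceName "myCloudService" -Name "myVM" `
                          -ImageName "myVMImage" -OSState Specialized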

    Read the article

  • How to troubleshoot ethernet port on laptop

    - by Psallas Vassilios
    I have a problem with my wired connection. To be more specific, my laptop doesn't seems to recognize that I have plugged in an ethernet cable. I tried to download new drivers for my ethernet card, but I couldn't find any solutions. Maybe because I am new to Linux, so I'm not familiar with running commands in the terminal. OK I have typed the command and here are the results: 00:04.0 Ethernet controller [0200]: Silicon Integrated Systems [SiS] 191 Gigabit Ethernet Adapter [1039:0191] (rev 02) For the second reply I don't know if the following is what you asked me: ?Memory: 3.9 GiB ?Processor: Intel Core 2 Duo CPU P8800 @ 2.66GHz × 2 ?OS type: 32-bit My Ethernet connection had some problem on Windows too. I have changed recently my internet provider, and since then my ethernet cable is not recognized by the laptop. At that time I was still on Windows. I thought that with Ubuntu the problem would be solved, but unfortunately the problem still persists. If someone can help me to solve my problem I'll be thankful. Here are the results of the three first commands you told me to run: lsmod | grep sis190 sis190 22570 0 sudo modprobe sis190 ifconfig eth0 Link encap:Ethernet HWaddr 00:90:f5:90:81:7e UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:185 errors:0 dropped:0 overruns:0 frame:0 TX packets:185 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:22672 (22.6 KB) TX bytes:22672 (22.6 KB) wlan0 Link encap:Ethernet HWaddr 00:25:d3:2c:3a:ae inet addr:192.168.1.72 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::225:d3ff:fe2c:3aae/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:260 errors:0 dropped:0 overruns:0 frame:0 TX packets:363 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:71992 (71.9 KB) TX bytes:52000 (52.0 KB) and the results of running the last two commands: dmesg | grep -e eth -e sis190 [ 0.816667] sis190: sis190 Gigabit Ethernet driver 1.4 loaded [ 0.816728] sis190 0000:00:04.0: setting latency timer to 64 [ 0.816751] sis190: 0000:00:04.0: Read MAC address from EEPROM [ 0.904032] sis190: 0000:00:04.0: Realtek PHY RTL8201 transceiver at address [ 1.416030] sis190: 0000:00:04.0: Using transceiver at address 1 as default [ 1.448235] sis190 0000:00:04.0: eth0: 0000:00:04.0: SiS 191 PCI Gigabit Ethernet adapter at f8410000 (IRQ: 19), 00:90:f5:90:81:7e [ 1.448238] sis190 0000:00:04.0: eth0: GMII mode. [ 1.448243] sis190 0000:00:04.0: eth0: Enabling Auto-negotiation [ 11.560907] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [ 16.372019] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [ 16.372265] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [ 26.424038] sis190 0000:00:04.0: eth0: auto-negotiating... nm-tool NetworkManager Tool State: connected (global) - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: sis190 State: unavailable Default: no HW Address: 00:90:F5:90:81:7E Capabilities: Carrier Detect: yes Wired Properties Carrier: off
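
    The nm-tool output above ends with "Carrier: off", which means the kernel is not detecting a live link on the port. A few generic checks (not from the original question, and assuming the interface is still named eth0) can help separate a cable/provider-equipment problem from a driver problem:

        # Bring the interface up and look for LOWER_UP while a cable is plugged in
        sudo ip link set eth0 up
        ip link show eth0

        # ethtool reports the negotiated speed and an explicit "Link detected: yes/no"
        sudo apt-get install ethtool
        sudo ethtool eth0

        # Watch kernel messages while unplugging and replugging the cable
        dmesg | tail -n 20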

    Read the article

  • Can see samba shares but not access them

    - by nitefrog
    For the life of me I cannot figure this one out. I have samba installed and set up on the ubuntu box and on the Win7 box I CAN SEE all the shares I created. I created two users on ubuntu that map to the users in windows. On ubuntu they are both admins, user A & B on Windows User A is admin and user B is poweruser. User A can see both shares and access them, but user B can see everythin, but only access the homes directory, the other directory throws an error. I have two drives in Ubuntu and this is the smb.config file (I am new to samba): [global] workgroup = WORKGROUP server string = %h server (Samba, Ubuntu) wins support = no dns proxy = yes name resolve order = lmhosts host wins bcast log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 panic action = /usr/share/samba/panic-action %d security = user encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes map to guest = bad user ; usershare max shares = 100 usershare allow guests = yes And here is the share section: Both user A & B can access this from windows. No problems. [homes] comment = Home Directories browseable = no writable = yes Both User A & B can see this share, but only user A can access it. User B get an error thrown. [stuff] comment = Unixmen File Server path = /media/data/appinstall/ browseable = yes ;writable = no read only = yes hosts allow = The permission for the media/data/appinstall/ is as follows: appInstall properties: share name: stuff Allow others to create and delete files in this folder is cheeked Guest access (for people without a user account) is checked permissions: Owner: user A Folder Access: Create and delete files File Access: --- Group: user A Folder Access: Create and delete files File Access: --- Others Folder Access: Create and delete files File Access: --- I am at a loss and need to get this work. Any ideas? The goal is to have a setup like this. 3 users on window machines. Each user on the data drive will have their own personal folder where they are the ones that can only access, then another folder where 2 of the users will have read only and one user full access. I had this setup before on windows, but after what happened I am NEVER going back to windows, so Unix here I am to stay! I am really stuck. I am running Ubuntu 11. I could reformat again and put on version 10 if that would make life easier. I have been dealing with this since Wed. 3pm. Thanks.
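
    One way to get the behaviour described at the end (a shared folder where one user has full access and the others are read-only) is to drive it from the share definition rather than only from filesystem permissions; the user names below are placeholders, and the Linux permissions on the path still have to allow the same access:

        [stuff]
           comment = Unixmen File Server
           path = /media/data/appinstall/
           browseable = yes
           read only = yes
           ; everyone listed may connect read-only, only userA may write
           valid users = userA userB
           write list = userA
           ; either list the hosts allowed to connect or drop the line entirely;
           ; the empty "hosts allow =" in the config above is worth checking
           hosts allow = 192.168.1.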

    Read the article

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM  Regulated Content in WebCenterUSDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper) Life science customers now have the ability to take advantage of all of the benefits of Oracle’s WebCenter Content, a global leader in Enterprise Content Management.   For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999 and in 2011, certified that WebCenter was a 21CFR Part 11 compliant content management platform (White Paper).  In addition, USDM has built Validation Accelerators Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world class solution.With the Part 11 certification, Oracle’s WebCenter now provides regulated life science organizations  the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including  a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology.  Here are a few screen shot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting. Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet the ever changing business and regulatory requirements.  Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality.Oracle has been #1 in life sciences because of their ability to develop cost effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for their customers.  Adding a world class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications.USDM provides:•    Expertise in Life Science ECM Business Processes•    Prebuilt Life Science Configuration in WebCenter •    Validation Accelerator Packs for WebCenterUSDM is very proud to support Oracle’s expanding commitment to Life Sciences…. For more information please contact:  [email protected] Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • How can I set up Friendly URL to Nginx?

    - by MKK
    I'm trying to use dokuwiki with its Friendly URL on Nginx. The problem that I'm facing is, it doesn' show correct path to any link(even stylesheet, and images) on every page It looks that paths are missing wiki/ part. If I click on the image and show its destination, it shows this url http://foo-sample.com/lib/tpl/dokuwiki/images/logo.png But it has to be this below. http://foo-sample.com/wiki/lib/tpl/dokuwiki/images/logo.png and login URL is not working either. If I click on login link, it takes me to http://foo-sample.com/wiki/start?do=login&sectok=ff7d4a68936033ed398a8b82ac9 and it says 404 Not Found I took a look at this https://www.dokuwiki.org/rewrite#nginx and tried as much as possible. However it still doesn't work. Here's my conf files. How can I fix this problem? dokuwiki is set in /usr/share/wiki /etc/nginx/conf.d/rails.conf upstream sample { ip_hash; server unix:/var/run/unicorn/unicorn_foo-sample.sock fail_timeout=0; } server { listen 80; server_name foo-sample.com; root /var/www/html/foo-sample/public; location /wiki { alias /usr/share/wiki; index doku.php; } location ~ ^/wiki.+\.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index doku.php; fastcgi_split_path_info ^/wiki(.+\.php)(.*)$; fastcgi_param SCRIPT_FILENAME /usr/share/wiki$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } /usr/share/wiki/.htaccess ## Enable this to restrict editing to logged in users only ## You should disable Indexes and MultiViews either here or in the ## global config. Symlinks maybe needed for URL rewriting. #Options -Indexes -MultiViews +FollowSymLinks ## make sure nobody gets the htaccess files <Files ~ "^[\._]ht"> Order allow,deny Deny from all Satisfy All </Files> # Uncomment these rules if you want to have nice URLs using # $conf['userewrite'] = 1 - not needed for rewrite mode 2 # Not all installations will require the following line. If you do, # change "/dokuwiki" to the path to your dokuwiki directory relative # to your document root. # If you enable DokuWikis XML-RPC interface, you should consider to # restrict access to it over HTTPS only! Uncomment the following two # rules if your server setup allows HTTPS. RewriteCond %{HTTPS} !=on RewriteRule ^lib/exe/xmlrpc.php$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301] <IfModule mod_geoip.c> GeoIPEnable On Order deny,allow deny from all SetEnvIf GEOIP_COUNTRY_CODE JP AllowCountry Allow from .googlebot.com Allow from .yahoo.net Allow from .msn.com Allow from env=AllowCountry </IfModule>
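
    For comparison, a commonly used nginx approach keeps DokuWiki's nice-URL rewriting inside nginx instead of relying on the .htaccess file (which nginx ignores). The sketch below assumes DokuWiki lives in /usr/share/wiki and is served under /wiki, uses root instead of alias because the directory name matches the URL prefix, and leaves out the _media/_detail/_export rewrites; it is a starting point to adapt, not a drop-in replacement for the config above:

        location /wiki {
            root /usr/share;            # /wiki/... maps onto /usr/share/wiki/...
            index doku.php;
            # pretty URLs such as /wiki/start?do=login fall through to doku.php
            try_files $uri $uri/ @dokuwiki;
        }

        location @dokuwiki {
            rewrite ^/wiki/(.*) /wiki/doku.php?id=$1&$args last;
        }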

    Read the article

  • Nautilus ignores / misinterprets view size

    - by BlueZero4
    I noticed that a lot of my folders had suddenly switched to higher view sizes than I had specificied. I was assuming that somehow nautilus had suddenly decided to create per-folder entries for said folders with incorrect view sizes. So I found this question: How to reset all per-folder view settings in nautilus? I found the folder specified in the answer (~/.local/share/gvfs-metadata) and found that it was actually important to delete the files INSIDE the folder, because for some reason deleting the folder itself didn't work for some reason. After doing that, I discovered that the odd setting was for the default view settings, not for a handful of files. Nautilus actually handles the per-folder settings like it should, but it ignores the global folder settings. I want Nautilus to, by default, display all non-specified folders as compact view, 50%. My folders are using the compact setting like I want, but they are not down to 50%. At a guess, they are at 100%. Altering the view size of the icon view can set the compact view to 33%, but I'm not sure by what mechanism this functions. I haven't extensively tested the other view sizes because I don't plan on using them much at all. Next I looked up questions like How do I reset nautilus to the default configuration? I'm expecting the problem to be a corrupted config file or something of the sort, so I hunted down directories like ~/.nautilus, ~/.gconf/apps/nautilus, and ~/.gnome2/nautilus. (I don't have a ~/.nautilus directory, so I'm assuming that's only for older versions.) I attempted to remove the contents of each, but I can't seem to force Nautilus back to default configuration settings. Actually viewing Nautilus's preferences in GConf made the settings look like they were what I wanted them to be, which is odd. I'd like to force Nautilus to default settings, basically. Though if something else will fix it, I'll take it too. I'm not interested in doing a full uninstall, reinstall of Nautilus if I don't have to. ==EDIT1== Turns out that Nautilus just writes the settings in GConf for the heck of it. Nautilus only really uses the settings that it stores in DConf. I did gsettings reset-recursively org.gnome.nautilus, which actually did reset Nautilus to default, but it still doesn't like my view size settings.
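
    If the aim is simply to force the global defaults from the command line, something along these lines may work on GNOME 3-era Nautilus; the schema and key names below are assumptions, so list the real ones first and adjust:

        # List every Nautilus key with its current value to confirm the names
        gsettings list-recursively org.gnome.nautilus

        # Assumed keys: compact view by default, at the 50% ('smaller') zoom level
        gsettings set org.gnome.nautilus.preferences default-folder-viewer 'compact-view'
        gsettings set org.gnome.nautilus.compact-view default-zoom-level 'smaller'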

    Read the article

  • How should I structure my turn based engine to allow flexibility for players/AI and observation?

    - by Reefpirate
    I've just started making a Turn Based Strategy engine in GameMaker's GML language... And I was cruising along nicely until it came time to handle the turn cycle, determining who is controlling which player, and also how to handle the camera and what is displayed on screen. Here's an outline of the main switch happening in my main game loop at the moment:
        switch (GameState)
        {
            case BEGIN_TURN:
                // Start of turn operations/routines
                break;
            case MID_TURN:
                switch (PControlledBy[Turn])
                {
                    case HUMAN:
                        switch (MidTurnState)
                        {
                            case MT_SELECT:
                                // No units selected, 'idle' UI state
                                break;
                            case MT_MOVE:
                                // Unit selected and attempting to move
                                break;
                            case MT_ATTACK:
                                break;
                        }
                        break;
                    case COMPUTER:
                        // AI ROUTINES GO HERE
                        break;
                    case OBSERVER:
                        // OBSERVER ROUTINES GO HERE
                        break;
                }
                break;
            case END_TURN:
                // End of turn routines/operations, and move Turn to next player
                break;
        }
    Now, I can see a couple of problems with this set-up already... But I don't have any idea how to go about making it 'right'. Turn is a global variable that stores which player's turn it is, and the BEGIN_TURN and END_TURN states make perfect sense to me... But the MID_TURN state is baffling me because of the things I want to happen here: If there are players controlled by humans, I want the AI to do its thing on its turn here, but I want to be able to have the camera follow the AI as it makes moves in the human player's vision. If there are no human-controlled players, I'd like to be able to watch two or more AIs battle it out on the map with god-like 'observer' vision. So basically I'm wondering if there are any resources for how to structure a Turn Based Strategy engine? I've found lots of writing about pathfinding and AI, and those are all great... But when it comes to handling the turn structure and the game states I am having trouble finding any resources at all. How should the states be divided to allow flexibility between the players and the controllers (HUMAN, COMPUTER, OBSERVER)? Also, maybe if I'm on the right track I just need some reassurance before I lay down another few hundred lines of code...
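
    One common answer (sketched below in JavaScript rather than GML, purely as an illustration with invented names) is to give every controller - human, AI, observer - the same small interface, so the MID_TURN branch asks the current player's controller what to do and the camera asks it what to watch, instead of switching on HUMAN/COMPUTER/OBSERVER in the main loop:

        // Hypothetical controller objects sharing one interface.
        var HumanController = {
          beginTurn:    function (player) { /* open the idle/select UI (MT_SELECT) */ },
          update:       function (player) { /* drive select / move / attack from input */ },
          cameraTarget: function (player) { return player.selectedUnit; }
        };

        var AIController = {
          beginTurn:    function (player) { player.plan = planMoves(player); }, // assumed AI helper
          update:       function (player) { executeNextMove(player); },         // assumed AI helper
          cameraTarget: function (player) { return player.movingUnit; }
        };

        // The main loop no longer cares who is in control; an "observer" game is just
        // one where every player has an AIController and the local view follows
        // whichever controller is currently active.
        function midTurn(game) {
          var player = game.players[game.turn];
          player.controller.update(player);
          game.camera.follow(player.controller.cameraTarget(player));
        }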

    Read the article

  • Creating classes in JavaScript

    - by Renso
    Goal: JavaScript has no built-in class-instance mechanism, since you define "classes" in js with object literals. The goal is to create classical classes like you would in C#, Ruby, Java, etc., with inheritance and instances. Rather than typical class definitions using object literals, js has a constructor function and the NEW operator that will allow you to new-up a class and optionally provide initial properties to initialize the new object with. The new operator changes the function's context and the behavior of the return statement.
        var Person = function(name) {
           this.name = name;
        };

        //Init the person
        var dude = new Person("renso");
        //Validate the instance
        assert(dude instanceof Person);
    When a constructor function is called with the new keyword, the context changes from the global window to a new and empty context specific to the instance; "this" will refer in this case to the "dude" context. Here is the class pattern that you will need to define your own CLASS emulation library:
        var Class = function() {
           var _class = function() {
              this.init.apply(this, arguments);
           };
           _class.prototype.init = function(){};
           return _class;
        };

        var Person = new Class();
        Person.prototype.init = function() {};
        var person = new Person;
    In order for the class emulator to support adding functions and properties to static classes as well as object instances of Person, change the emulator:
        var Class = function() {
           var _class = function() {
              this.init.apply(this, arguments);
           };
           _class.prototype.init = function(){};
           _class.fn = _class.prototype;
           _class.fn.parent = _class;
           //adding class properties
           _class.extend = function(obj) {
              var extended = obj.extended;
              for(var i in obj) {
                 _class[i] = obj[i];
              }
              if(extended) extended(_class);
           };
           //adding new instances
           _class.include = function(obj) {
              var included = obj.included;
              for(var i in obj) {
                 _class.fn[i] = obj[i];
              }
              if(included) included(_class);
           };
           return _class;
        };
    Now you can use it to create and extend your own object instances:
        //adding static functions to the class Person
        var Person = new Class();
        Person.extend({
           find: function(name) {/*....*/},
           delete: function(id) {/*....*/}
        });
        //calling static function find
        var person = Person.find('renso');

        //adding properties and functions to the class' prototype so that they
        //are available on instances of the class Person
        var Person = new Class;
        Person.include({
           save: function(name) {/*....*/},
           delete: function(id) {/*....*/}
        });
        var dude = new Person;
        //calling instance function
        dude.save('renso');
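
    A small usage sketch (not from the original post) showing how the arguments passed to new reach init through the this.init.apply(this, arguments) call in the emulator above:

        var Person = new Class();

        Person.include({
          init:  function(name) { this.name = name; },        // runs on new Person(...)
          greet: function()     { return "Hello, " + this.name; }
        });

        var dude = new Person("renso");
        console.log(dude.greet());   // "Hello, renso"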

    Read the article

  • Releasing the new Sample Browser Phone app

    - by Jialiang
    Originally posted on: http://geekswithblogs.net/Jialiang/archive/2014/06/05/releasing-the-new-sample-browser-phone-app.aspx Starting its journey in 2010, Sample Browser is achieving its tetralogy by releasing a Windows Phone version Sample Browser today. The new Windows Phone app is the fourth milestone of Sample Browser since we released the desktop version and the Visual Studio version in 2012 and the Windows Store version in 2013. This time, by providing a sample browser designed for a ‘walking’ platform in response to MVPs’ suggestions during last year’s MVP Global Summit, we are literally putting a world of code samples "at developers’ fingertips”. If you like to have a code gallery of over 7000 quality code samples in your pocket, then click here to download our Windows Phone Sample Browser and start a fantastic mobile experience. With Windows Phone version Sample Browser and the Internet, you can search for code samples on MSDN at anytime and anywhere you want, 24/7 and–even to bed. You can also check code sample details and share them with your friends. Compared to the other 3 pieces in the tetralogy (desktop version, Visual Studio version, and the Windows Store version), the Windows Phone version Sample Browser sells itself for convenience and instant connectivity. For those who need to reach code samples under mobile circumstances where no PCs is available, Windows Phone version Sample Browser will definitely be the right service you are seeking for. Aside from sharing samples via emails as the other 3 do, the Windows Phone version Sample Browser also allows you to share the sample via SMS and Near Field Communication (NFC).   What's Next Currently, the Windows Phone Sample Browser only supports online MSDN code searching, but we already plan to upgrade Sample Browser to allow users to do ‘Bing code search’, and add and manage their private code snippets.  We will also upgrade the app to universal app. Universal App is a new concept brought up in the Microsoft Build Developer Conference 2014. It is a new development model that allows for a single app to be deployed across multiple Windows devices such as Windows Phone, Windows 8.1, and XBox. Therefore, once we finish upgrading Sample Browser to a universal app, you can synchronize your own code snippets across different devices; You can also mark a code sample as favorite on your Windows Phone and continue to study the sample when you are on your desktop. By then, sharing data between platforms will be a piece of cake. Also, the user experience of Sample Browser on different platforms will be more consistent.  The best is yet to come!   We sincerely suggest you give Sample Browser a try (click here to download). If you love what you see in Sample Browser, please recommend it to your friends and colleagues. If you encounter any problems or have any suggestions for us, please contact us at [email protected]. Your precious opinions and comments are more than welcome.

    Read the article

  • Develop web site from existing software or cherry pick and use a web framework?

    - by erisco
    A small team and I are tasked with developing a web site. The client has referenced a particular open source project (we'll call it X) when describing some of the features. Because of this, the team wants to start with X and adapt it to satisfy the client. I have looked at X and its code and, in my opinion, it would be unwise. However, my experience is limited, and could really benefit from the insights of others so that I can figure out what I should be asserting as the right direction for the team. My red flags are going up and this is why. X was developed in the earlier days of PHP; 500 line blocks of code are the norm; global variables are abundant; giant switch cases are the norm for switching between which page is shown. There is no clear mapping between URL and where the code for that page sits. From a feature-set standpoint, X is actually software specialized for a different task and has dozens of features we don't need or have use for that come as core assumptions. We will be unable to adapt X through its plugin system. That said, there are a few features which can be mapped, with some modification, to suit our purposes. I believe this is the attraction the team feels. I would feel comfortable if, instead of using X directly, we lifted what is salvageable and useful to us. We can then use that code, and the same 3rd party libraries X is using, in a new code base built on top of a PHP web framework (particularly Agavi, so you understand what I mean by 'web framework'). The web framework gives us a strong MVC structure and provides the common facilities for web development, or adapters to work with 3rd party libraries that do so. We will also have a clean slate feature-wise to work from, which means we can work additively instead of subtractively. Because the code base is better structured, and contains none of what we don't need, it will be easier to document, which is a critical requirement of our client. So to summarize, the team wants to use X, whereas I want to take the bits we can from X and use a web framework instead. I want to bounce this opinion off of other's experiences so that I can be more informed. Thanks for your insight.

    Read the article

  • Investment scheme for a PC game project

    - by Alex Kamen
    Good day everyone, I am working on a PC game project that has 3 phases planned, micro, macro and mmo versions [if confused, see a brief description at the bottom]. I have found a potential investor for the micro version of the game, but naturally, he requested a detailed plan of how the game will pay back. And the problem is that micro version itself is not supposed to be monetized much, other than some ads and limited in-game currency utilization. The idea is that with this combat demo already at hand, it should be possible to get a really large enough investment (millions of dollars) and use it to pay back the initial small one (thousands of dollars) and take the project into macro phase, which will really make profit. This way, everybody is going to win, provided that I can deliver the end-product. Yet while I am confident of that both the conception of the macro and the real game-play of the micro versions are going to be appealing, I don’t know how to obtain any guarantee of that I will be able to get funded once I have the prototype ready. And without that, I won’t receive the funds for the prototype in the first place! To summarize, my question is: how to figure out my future possibilities of getting funded once I have combat demo out, basically “whom to write to and what”. Ideally, I would like some sort of a preliminary agreement with a game publisher, something that would basically state “If the developer provides the product in time and in quality corresponding to the specifications given, the publisher guarantees to allocate funds for distribution and further development, thereby acquiring the right to X part of all future profits”. Does this sound sane? It’s just that I don’t want to sell all of my rights out straight away by taking a big outside investment while the project is in such early stage. I would appreciate if you would share your thoughts on this kind of scheme, and be sure to ask questions as I am sure I must have forgotten to mention a ton of important things, like the fact that initial funds are going to be spent on outsourcing (living in Siberia is really just great). [here’s a brief outline of what each version will feature] [micro] 1) turn based tactical combat rules 2) character development 3) arena/tournament system [macro] 4) ai-ruled dynamic interactive worlds 5) global map adventuring 6) strategic rpg + god simulator gameplay [mmo] 7) Persistent worlds system 8) Social structures system (“guilds/clans”) 9) god-simulation on the mmo scale P.S. Obviously, these features are incremental, so that mmo version has all 9.

    Read the article

  • Brain Teaser: How Did I Do This (Part 1: The Solution)

    - by Geertjan
    In Part 1: The Challenge, published this time last week, I introduced a "brain teaser". The brain teaser asks you to figure out how to allow images and other files to be meaningfully dropped onto a NetBeans Platform application, i.e., on the drop something useful should happen with the dropped file: if the file is an image, the image should open in the IDE; if the file is a PDF document, the PDF viewer should open externally; if the file is a text file, it should open as a text in the IDE, etc. Solution. And here is the solution: http://bits.netbeans.org/dev/javadoc/org-openide-windows/org/openide/windows/ExternalDropHandler.html When an implementation of the "ExternalDropHandler" class is available in the global Lookup, and an object is being dragged over some part of the main window, the window system may call the methods of this class to decide whether it can accept or reject the drag operation. And when the object is actually dropped, this class will be asked to handle the drop. OK, so go ahead and implement the above class and put it into the Lookup. Or... guess what? The NetBeans Platform has a default implementation of the above class, appropriately named "DefaultExternalDropHandler". Not only is this useful to learn about how to implement the ExternalDropHandler class (i.e., by reading the source here): you can simply include the module that contains this class in your own NetBeans Platform application and then your application will be able to receive external drag/drop events and do something meaningful with them thanks to the DefaultExternalDropHandler. Do this: Open your NetBeans Platform application in NetBeans IDE. Right-click the application in the Projects window and choose Properties. In the Libraries tab, expand the "ide" cluster, and select "User Utilities". (That's where "DefaultExternalDropHandler.java" is found and registered in the Lookup.) Now click the "Resolve" button, if it appears, because some additional related modules need to now be included, if they haven't been included yet. Again in the "ide" cluster in the Libraries tab, select "Image". That's the Image Editor. Click OK. Run the application. Drag an image or some other type of file into your application, from outside the application, and you'll see the application tries to handle the drop. If the file being dragged is an image, it will open in the Image Editor, which you included in the previous step of these instructions. Hurray, you're done. Without any programming at all, you've added a cool new feature to your application.

    Read the article

  • HTML5 game programming style

    - by fnx
    I am currently trying learn javascript in form of HTML5 games. Stuff that I've done so far isn't too fancy since I'm still a beginner. My biggest concern so far has been that I don't really know what is the best way to code since I don't know the pros and cons of different methods, nor I've found any good explanations about them. So far I've been using the worst (and propably easiest) method of all (I think) since I'm just starting out, for example like this: var canvas = document.getElementById("canvas"); var ctx = canvas.getContext("2d"); var width = 640; var height = 480; var player = new Player("pic.png", 100, 100, ...); also some other global vars... function Player(imgSrc, x, y, ...) { this.sprite = new Image(); this.sprite.src = imgSrc; this.x = x; this.y = y; ... } Player.prototype.update = function() { // blah blah... } Player.prototype.draw = function() { // yada yada... } function GameLoop() { player.update(); player.draw(); setTimeout(GameLoop, 1000/60); } However, I've seen a few examples on the internet that look interesting, but I don't know how to properly code in these styles, nor do I know if there are names for them. These might not be the best examples but hopefully you'll get the point: 1: Game = { variables: { width: 640, height: 480, stuff: value }, init: function(args) { // some stuff here }, update: function(args) { // some stuff here }, draw: function(args) { // some stuff here }, }; // from http://codeincomplete.com/posts/2011/5/14/javascript_pong/ 2: function Game() { this.Initialize = function () { } this.LoadContent = function () { this.GameLoop = setInterval(this.RunGameLoop, this.DrawInterval); } this.RunGameLoop = function (game) { this.Update(); this.Draw(); } this.Update = function () { // update } this.Draw = function () { // draw game frame } } // from http://www.felinesoft.com/blog/index.php/2010/09/accelerated-game-programming-with-html5-and-canvas/ 3: var engine = {}; engine.canvas = document.getElementById('canvas'); engine.ctx = engine.canvas.getContext('2d'); engine.map = {}; engine.map.draw = function() { // draw map } engine.player = {}; engine.player.draw = function() { // draw player } // from http://that-guy.net/articles/ So I guess my questions are: Which is most CPU efficient, is there any difference between these styles at runtime? Which one allows for easy expandability? Which one is the most safe, or at least harder to hack? Are there any good websites where stuff like this is explained? or... Does it all come to just personal preferance? :)
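
    As one possible answer to the structural part of the question (a sketch only, not an authoritative pattern), the namespace style of example 3 combines naturally with the prototype-based Player from the original snippet; the performance difference between the three styles is usually negligible next to rendering cost, so readability and testability tend to decide:

        var Game = {
          width: 640,
          height: 480,
          entities: [],

          init: function () {
            this.canvas = document.getElementById("canvas");
            this.ctx = this.canvas.getContext("2d");
            this.entities.push(new Player("pic.png", 100, 100));
            this.loop();
          },

          loop: function () {
            Game.entities.forEach(function (e) { e.update(); e.draw(); });
            // requestAnimationFrame (where available) is preferred over setTimeout for drawing
            requestAnimationFrame(Game.loop);
          }
        };

        Game.init();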

    Read the article

  • EBusiness Maintenance Wizard

    - by cwarticki
    Seriously folks, you'd be amazed by the power and functionality of this tool. If you're an EBus customer, you must use the Maintenance Wizard. I know customers that have logged 2000+ SRs doing EBus upgrades the hard way, and others that have used the Maintenance Wizard and have performed production upgrades on 7 global instances with only a handful of SRs. You decide which is better. Oh, btw... it's part of your Premier Support investment. No additional cost necessary. -Chris Warticki
    The Maintenance Wizard is an E-Business Suite upgrade tool that can guide you through the code line upgrade process from 11.5.10.2 to 12.1.3 with an 11gR2 database. Additionally, it includes maintenance features for most releases of E-Business Suite applications. The Tool:
    * Presents step-by-step upgrade and maintenance processes
    * Enables validation of each step, tracks the completion of the steps, and maintains a log and status
    * Is a multi-user tool that enables the System Administrator to give different users assignments based on any combination of category, product family or task
    * Automatically installs many required patches
    * Provides project management utilities to record the time taken for each task, completion status and project reporting
    For More Information:
    * Review Note 215527.1 for additional information on the Maintenance Wizard
    * See Note 430732.1 to download the new Patch
    Sincerely, Oracle Proactive Support Center

    Read the article

  • Create simple jQuery plugin

    - by ybbest
    In the last post, I have shown you how to add the function to jQuery. In this post, I will show you how to create plugin to achieve this. 1. You need to wrap your code in the following construct, this is because you should not use $ directly as $ is global variable, it could have clash with some other library which also use $.Basically, you can pass in jQuery object into the function, so that $ is made available inside the function. (JavaScript use function to create scope, so you can make sure $ is referred to jQuery inside the function ) (function($){ //Your code goes here. }; })(jQuery); 2. Put your code into the construct above. (function ($) { $.getParameterByName = function (name) { name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]"); var regexS = "[\\?&]" + name + "=([^&#]*)"; var regex = new RegExp(regexS); var results = regex.exec(window.location.search); if (results == null) return ""; else return decodeURIComponent(results[1].replace(/\+/g, " ")); }; })(jQuery); 3. Now you can reference the code into you project and you can call the method in you JavaScript References: Provides scope for variables Variables are scoped at the function level in javascript. This is different to what you might be used to in a language like C# or Java where the variables are scoped to the block. What this means is if you declare a variable inside a loop or an if statement, it will be available to the entire function. If you ever find yourself needing to explicitly scope a variable inside a function you can use an anonymous function to do this. You can actually create an anonymous function and then execute it straight away and all the variables inside will be scoped to the anonymous function: (function() { var myProperty = "hello world"; alert(myProperty); })(); alert(typeof(myProperty)); // undefined How does an anonymous function in JavaScript work? Building Your First jQuery Plugin A Plugin Development Pattern
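
    A usage sketch for the plugin defined above (the URL and parameter names are made up for the example):

        // Given a page loaded as http://example.com/page.html?product=42&lang=en
        var productId = $.getParameterByName("product");   // "42"
        var missing   = $.getParameterByName("foo");       // "" when the parameter is absent

        if (productId !== "") {
            console.log("Product id from the query string: " + productId);
        }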

    Read the article

  • A solution for a PHP website without a framework

    - by lortabac
    One of our customers asked us to add some dynamic functionality to an existent website, made of several static HTML pages. We normally work with an MVC framework (mostly CodeIgniter), but in this case moving everything to a framework would require too much time. Since it is not a big project, not having the full functionality of a framework is not a problem. But the question is how to keep code clean. The solution I came up with is to divide code in libraries (the application's API) and models. So inside HTML there will only be API calls, and readability will not be sacrificed. I implemented this with a sort of static Registry (sorry if I'm wrong, I am not a design pattern expert): <?php class Custom_framework { //Global database instance private static $db; //Registered models private static $models = array(); //Registered libraries private static $libraries = array(); //Returns a database class instance static public function get_db(){ if(isset(self::$db)){ //If instance exists, returns it return self::$db; } else { //If instance doesn't exists, creates it self::$db = new DB; return self::$db; } } //Returns a model instance static public function get_model($model_name){ if(isset(self::$models[$model_name])){ //If instance exists, returns it return self::$models[$model_name]; } else { //If instance doesn't exists, creates it if(is_file(ROOT_DIR . 'application/models/' . $model_name . '.php')){ include_once ROOT_DIR . 'application/models/' . $model_name . '.php'; self::$models[$model_name] = new $model_name; return self::$models[$model_name]; } else { return FALSE; } } } //Returns a library instance static public function get_library($library_name){ if(isset(self::$libraries[$library_name])){ //If instance exists, returns it return self::$libraries[$library_name]; } else { //If instance doesn't exists, creates it if(is_file(ROOT_DIR . 'application/libraries/' . $library_name . '.php')){ include_once ROOT_DIR . 'application/libraries/' . $library_name . '.php'; self::$libraries[$library_name] = new $library_name; return self::$libraries[$library_name]; } else { return FALSE; } } } } Inside HTML, API methods are accessed like this: <?php echo Custom_framework::get_library('My_library')->my_method(); ?> It looks to me as a practical solution. But I wonder what its drawbacks are, and what the possible alternatives.

    Read the article

  • “It Isn’t Easy At All; Otherwise, Everyone Would Be Doing It”

    - by Kathryn Perry
    A few months ago, JP Saunders (pictured left), who leads the go-to-market initiatives for the Oracle CX Service offering, kicked off a series of articles about modern customer service. He contends that to take care of customers?and the people that support those customers?companies need to make it easy to deliver consistently great experiences. But it’s not easy; it’s an art. The six posts in The Art of Easy series will help you better understand some of the customer service challenges you face and how to avoid common pitfalls. We pulled them all together here in one post for continuity and easy access. Saunders introduces the series with The Art of Easy: Make It Easy To Deliver Great Customer Service Experiences (Part 1). The Art of Easy: Offer Self Service With the Emphasis on Service (Part 2) by David Fulton (pictured left): David Fulton, Director of Product Management, Oracle Service Cloud, shares five tenets of customer self service that move an organization closer to becoming a modern customer service business. Easy Decisions For Complex Problems (Part 3) by Heike Lorenz (pictured right): Heike Lorenz, Director of Global Product Marketing, Policy Automation, writes about automating service policies to ensure that the correct decisions are being applied to the right people. The goal is to nurture the trusted relationships with customers during complex decision-making processes. Moving at the Speed of Easy (Part 4) by Chris Ulmand (pictured left): Chris Omland, Director of Product Management, Oracle Service Cloud, addresses the need for speed to keep up with customers’ expectations. His advice—start with a platform that enables agile innovation, respects a company’s unique needs, and has proven reliability to protect customer relationships. Knowledge Makes It Easy For Everyone (Part 5) by Nav Chakravarti (pictured rig: Vice President Nav Chakravarti, Oracle Service Cloud, talks about managing the knowledge that customers need and want. He coaches readers on delivering answers to customers’ questions easily, in context, with relevance, reliably, and accurately. Making Easy, Both Effective and Efficient (Part 6) by Melinda Uhland (pictured left): Melinda Uhland, Oracle CX Product Management teaches us that happy agents produce happy customers. A Modern Customer Service organization is one that invests in its agents and empowers them with tools to make them efficient and effective, which, in turn, improves customer results.

    Read the article

  • Creating a Yes/No MessageBox in a NuGet install/uninstall script

    - by ParadigmShift
    Sometimes getting a little feedback during the install/uninstall process of a NuGet package could be really useful. Instead of accounting for all possible ways to install your NuGet package for every user, you can simplify the installation by clarifying with the user what they want. This example shows how to generate a windows yes/no message box to get input from the user in the PowerShell install or uninstall script. We’ll use the prompt on the uninstall to confirm if the user wants to delete a custom setting that the initial install placed in their configuration.  Obviously you could use the prompt in any way you want. The objects of the message box are generated similar to the controls in the code behind of a WinForm. At the beginning of your script enter this: param($installPath, $toolsPath, $package, $project)   # Set up path variables $solutionDir = Get-SolutionDir $projectName = (Get-Project).ProjectName $projectPath = Join-Path $solutionDir $projectName   ################################################################################################ # WinForm generation for prompt ################################################################################################ function Ask-Delete-Custom-Settings { [void][reflection.assembly]::loadwithpartialname("System.Windows.Forms") [Void][reflection.assembly]::loadwithpartialname("System.Drawing")   $title = "Package Uninstall" $message = "Delete the customized settings?" #Create form and controls $form1 = New-Object System.Windows.Forms.Form $label1 = New-Object System.Windows.Forms.Label $btnYes = New-Object System.Windows.Forms.Button $btnNo = New-Object System.Windows.Forms.Button   #Set properties of controls and form ############ # label1 # ############ $label1.Location = New-Object System.Drawing.Point(12,9) $label1.Name = "label1" $label1.Size = New-Object System.Drawing.Size(254,17) $label1.TabIndex = 0 $label1.Text = $message   ############# # btnYes # ############# $btnYes.Location = New-Object System.Drawing.Point(156,45) $btnYes.Name = "btnYes" $btnYes.Size = New-Object System.Drawing.Size(48,25) $btnYes.TabIndex = 1 $btnYes.Text = "Yes"   ########### # btnNo # ########### $btnNo.Location = New-Object System.Drawing.Point(210,45) $btnNo.Name = "btnNo" $btnNo.Size = New-Object System.Drawing.Size(48,25) $btnNo.TabIndex = 2 $btnNo.Text = "No"   ########### # form1 # ########### $form1.ClientSize = New-Object System.Drawing.Size(281,86) $form1.Controls.Add($label1) $form1.Controls.Add($btnYes) $form1.Controls.Add($btnNo) $form1.Name = "Form1" $form1.Text = $title #Event Handler $btnYes.add_Click({btnYes_Click}) $btnNo.add_Click({btnNo_Click}) return $form1.ShowDialog() } function btnYes_Click { #6 = Yes $form1.DialogResult = 6 } function btnNo_Click { #7 = No $form1.DialogResult = 7 } ################################################################################################ This has also wired up the click events to the form.  This is all it takes to create the message box. Now we have to actually use the message box and get the user’s response or this is all pointless.  We’ll then delete the section of the application/web configuration called <Custom.Settings> [xml] $configXmlContent = Get-Content $configFile   Write-Host "Please respond to the question in the Dialog Box." 
$dialogResult = Ask-Delete-Custom-Settings #6 = Yes #7 = No Write-Host "dialogResult = $dialogResult" if ($dialogResult.ToString() -eq "Yes") { Write-Host "Deleting customized settings" $customSettingsNode = $configXmlContent.configuration.Item("Custom.Settings") $configXmlContent.configuration.RemoveChild($customSettingsNode) $configXmlContent.Save($configFile) } if ($dialogResult.ToString() -eq "No") { Write-Host "Do not delete customized settings" } The part where I check if ($dialog.Result.ToString() –eq “Yes”) could just as easily check the value for either 6 or 7 (Yes or No).  I just personally decided I liked this way better.   Shahzad Qureshi is a Software Engineer and Consultant in Salt Lake City, Utah, USA His certifications include: Microsoft Certified System Engineer 3CX Certified Partner Global Information Assurance Certification – Secure Software Programmer – .NET He is the owner of Utah VoIP Store at http://www.utahvoipstore.com/ and SWS Development at http://www.swsdev.com/ and publishes windows apps under the name Blue Voice.

    Read the article

  • Code testing practice

    - by Robin Castlin
    So now I have come to the conclusion like many others that having some way of constantly testing your code is good practice since it enables fewer people to be involved (colleges and customers alike) by simply knowing what's wrong before someone else finds out the hard way. I've heard and read some about Unit Testing and understand what it's supposed to do and all. The there are so many different types of bugs. It can be everything from web browser not being able not being able to send correct values, javascript failing, a global function messing up a piece of code somewhere to a change that looked good when testing it out but fails in some special case which was hard to anticipate. My simply finding these errors I learn to rarely repeat them again, but there seems to always be new bugs to be found and learnt from. I would guess maybe the best practice would be to run every page and it's functions a couple of times, witness the result and repeat this in Firefox, Chrome and Internet Explorer (and all smartphones apparently) to make sure it works as intended. However this would take quite some time to do consider I don't work with patches/versions and do little fixes here and there a couple of times per week. What I prefer would be some kind of page I can just load that tests as much things as possible to make sure the site works as intended. Basicly just run a lot of cURL's with POST-values and see if I get expected result. But how would I preferably not increase the IDs of every mysql rows if I delete these testing rows? It feels silly to be on ID 1000 with maybe 50 rows in total. If I could build a new project from scratch I would probably implement some kind of smooth way to return a "TRUE" on testing instead of the actual page. But this solution would for the moment being have to be passed on existing projects. My question What would you recommend to be the best way to test my site to make sure that existing functions does their job upon editing the code? Should I consider to implement a lot of edits first, then test manually the entire code to make sure it still works? Is there any nice way of testing codes without "hurting" the ID columns? Extra thoughs Would it be a good idea to associate all of my files to the different parts of my site which they affect? For instance if I edit home.php I will through documentation test if my homepage's start works as intended since it's the only part of my site it should affect.
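
    One lightweight way to get the "single page that runs a lot of cURL's with POST-values" described above is a plain PHP script that posts to each page and checks the response for an expected string; the URLs, fields and expected values below are placeholders, and pointing the application at a separate test database (or honouring a TEST flag) is what keeps the real tables' auto-increment IDs untouched:

        <?php
        // Hypothetical smoke test - adjust URLs, POST fields and expected strings to your site.
        $tests = array(
            array('url' => 'http://example.com/login.php',
                  'post' => array('user' => 'testuser', 'pass' => 'secret'),
                  'expect' => 'Welcome'),
            array('url' => 'http://example.com/home.php',
                  'post' => array(),
                  'expect' => 'Latest news'),
        );

        foreach ($tests as $t) {
            $ch = curl_init($t['url']);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            if (!empty($t['post'])) {
                curl_setopt($ch, CURLOPT_POST, true);
                curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($t['post']));
            }
            $body = curl_exec($ch);
            curl_close($ch);
            echo (strpos($body, $t['expect']) !== false ? 'PASS ' : 'FAIL ') . $t['url'] . "\n";
        }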

    Read the article

  • 2012 EC Election Ballot open; Meet the Candidates Call tomorrow

    - by heathervc
    The JCP Executive Committee (EC) Election ballot is now open and all of the candidates' nominations materials are now available on JCP.org -- note that two new candidates were nominated late last week:  Liferay and North Sixty-One. It is shaping up to be an exciting election this year! The ratified candidates are:  Cinterion, Credit Suisse, Fujitsu and HP.The elected candidates are (9 candidates, 2 open seats):  Cisco Systems, CloudBees, Giuseppe Dell'Abate, Liferay, London Java Community, MoroccoJUG, North Sixty-One, Software AG, and Zero Turnaround. Tomorrow, 18 October, we will hold an open teleconference for the Java Community to meet the candidates and ask questions regarding their nomination.  We hope you will be able to participate in the call.  Should the time be inconvenient, a recording will be made available for download, and candidate questions may be posted on this blog entry or sent to [email protected]. Topic: Meet the EC Candidates Date: Thursday, October 18, 2012 Time: 9:30 am, Pacific Daylight Time (San Francisco, GMT-07:00) Meeting Number: 807 818 225 Meeting Password: MeetEC ------------------------------------------------------- To join the online meeting (Now from mobile devices) ------------------------------------------------------- 1. Go to https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&RT=MiM0 2. If requested, enter your name and email address. 3. If a password is required, enter the meeting password: MeetEC 4. Click "Join". To view in other time zones or languages, please click the link: https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&ORT=MiM0 ------------------------------------------------------- To join the audio conference only -------------------------------------------------------     +1 (866) 682-4770     Outside the US: global access numbers  https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=6279803 or +1 (408) 774-4073     Conference code: 9454597     Security code: JCPEC (52732)------------------------------------------------------- For assistance ------------------------------------------------------- 1. Go to https://jcp.webex.com/jcp/mc 2. On the left navigation bar, click "Support".

    Read the article

  • Silverlight 4 Twitter Client &ndash; Part 5

    - by Max
    In this post, we will look at implementing the following in our Twitter client - displaying the profile picture on the home page after logging in. To accomplish this, we need the following steps. 1) First of all, create another field in our GlobalVariable static class to hold the profile image URL - just a static string field: public static string profileImage { get; set; } 2) In the Login.xaml.cs file, before the line where we redirect to the Home page on a successful login, create another WebClient request to fetch details from this URL - http://api.twitter.com/1/users/show/<TwitterUsername>.xml. I am interested only in the “profile_image_url” node in the returned XML; you can choose whatever fields you need – the reference is the Twitter API documentation. 3) This particular URL does not require any authentication, so no network credentials are needed, and UseDefaultCredentials can stay true. The code would look like: WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp); WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = myService.UseDefaultCredentials = true; myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted_User); myService.DownloadStringAsync(new Uri("http://api.twitter.com/1/users/show/" + TwitterUsername.Text + ".xml")); 4) To parse the returned XML and get the value of the “profile_image_url” node, the LINQ code will look something like: XDocument xdoc; xdoc = XDocument.Parse(e.Result); var answer = (from status in xdoc.Descendants("user") select status.Element("profile_image_url").Value); GlobalVariable.profileImage = answer.First().ToString(); 5) After this, we can redirect to the Home page, completing our login formalities. 6) On the home page, add an Image tag and give it a name, something like <Image Name="image1" Stretch="Fill" Margin="29,29,0,0" Height="73" VerticalAlignment="Top" HorizontalAlignment="Left" Width="73" /> 7) In the OnNavigatedTo event of the home page, we can set the image source as below: image1.Source = new BitmapImage(new Uri(GlobalVariable.profileImage, UriKind.Absolute)); Hope this helps. If you have any queries / suggestions, please feel free to comment below or contact me. Technorati Tags: Silverlight,LINQ,Twitter API,Twitter

    Read the article

  • PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' undefined symbol: ZVAL_DELREF

    - by crmpicco
    I have an issue where I am unable to use JSON, which appears to be because of the following error. There is another thread on this forum that touches on a similar issue, but it's not quite the same. I am using CentOS 5.6 and have the following pear packages installed: [crmpicco@eq-www-php53 ~]$ pear list PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' - /usr/lib64/php/modules/json.so: undefined symbol: ZVAL_DELREF in Unknown on line 0 Installed packages, channel pear.php.net: ========================================= Package Version State Archive_Tar 1.3.7 stable Auth_SASL 1.0.2 stable Console_Getopt 1.3.1 stable Image_Barcode 1.1.2 stable Mail 1.1.14 stable Net_SMTP 1.2.10 stable Net_Socket 1.0.8 stable PEAR 1.9.4 stable Structures_Graph 1.0.4 stable XML_RPC 1.5.4 stable XML_Util 1.2.1 stable json 1.2.1 stable and have the following PHP packages installed: [crmpicco@eq-www-php53 ~]$ yum list installed | grep php php.x86_64 5.3.10-1.w5 installed php-cli.x86_64 5.3.10-1.w5 installed php-common.x86_64 5.3.10-1.w5 installed php-devel.x86_64 5.3.10-1.w5 installed php-gd.x86_64 5.3.10-1.w5 installed php-ldap.x86_64 5.3.10-1.w5 installed php-mcrypt.x86_64 5.3.10-1.w5 installed php-mysql.x86_64 5.3.10-1.w5 installed php-pdo.x86_64 5.3.10-1.w5 installed php-pear.noarch 1:1.9.4-1.w5 installed php-pear-Net-Socket.noarch 1.0.8-1.el5.centos installed php-soap.x86_64 5.3.10-1.w5 installed php-xml.x86_64 5.3.10-1.w5 installed The error: [crmpicco@eq-www-php53 ~]$ php -v PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/json.so' - /usr/lib64/php/modules/json.so: undefined symbol: ZVAL_DELREF in Unknown on line 0 PHP 5.3.10 (cli) (built: Feb 2 2012 23:23:12) Copyright (c) 1997-2012 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies My repolist reads as: [crmpicco@eq-www-php53 ~]$ yum repolist Loaded plugins: changelog, fastestmirror Excluding Packages in global exclude list Finished repo id repo name status base CentOS-5 - Base 3,548+43 epel Extra Packages for Enterprise Linux 5 - x86_64 6,815+156 extras CentOS-5 - Extras 245+23 rpmforge Red Hat Enterprise 5 - RPMforge.net - dag 11,016+67 updates CentOS-5 - Updates 233 webtatic Webtatic Repository 5 - x86_64 211+183 repolist: 22,068 I am getting HTTP 500 errors everywhere I use JSON, so my application is non-functional right now.

    Read the article

  • Mapping Drive Error - System Error 1808

    - by Julian Easterling
    A vendor is attempting to map and preserve a network drive using nt authority\system, so it stays persistent when the interactive session on the server is lost. They were able to do this on one server (Windows 2008 R2) but not on a second computer (also Windows 2008 R2). D:\PsExec.exe -s cmd.exe PsExec v1.98 - Execute processes remotely Copyright (C) 2001-2010 Mark Russinovich Sysinternals - www.sysinternals.com Microsoft Windows [Version 6.1.7600] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\Windows\system32>whoami nt authority\system C:\Windows\system32>net use New connections will be remembered. Status Local Remote Network -------------------------------------------------------------------- OK X: \\netapp1\share1 Microsoft Windows Network The command completed successfully. C:\Windows\system32>net use q: \\netapp1\share1 System error 1808 has occurred. The account used is a computer account. Use your global user account or local user account to access this server. C:\Windows\system32> I am unsure how to set up a "machine account mapping" that will preserve the drive letter of the mapped NetApp path, so that the service account running a Windows service can continue to access the share after the interactive logon has expired on the server. Since they were able to do this on one server but not the other, I'm not sure how to troubleshoot the problem. Any suggestions?

    Read the article
