Search Results

Search found 4242 results on 170 pages for 'mark c'.


  • Graphics card ATI Asus 9250 128 MB AGP problem. Monitor switching off.

    - by Dominick1978
    I have an old system with these specs: Motherboard: Via P4X266A (with AGP 4x); CPU: P4 1.7 GHz; Memory: 1152 MB DDR SDRAM; Graphics card: Voodoo 3 2000 16 MB; PSU: 300 W. It also has 1 DVD-ROM, 1 DVD-RW and 2 hard drives (all four connected via Molex), plus 1 Sound Blaster sound card and 1 Ethernet card (both connected via PCI). OS: XP Pro. Recently I bought an Asus 9250 128 MB AGP card to replace the Voodoo. When I switch on the PC, the initial screens (up to just after the XP logo) are sometimes distorted with blurred colours. Once XP has loaded there is some flickering, but the rest is OK. XP can't recognize the card and sees it only as a generic VGA adapter. I downloaded the latest XP drivers from the ATI website and installed them. After the restart everything is fine (no distorted or blurred image) until just after the XP logo; then the monitor turns off while the PC is still running. I have tried many drivers but the problem persists (of course I first removed the Voodoo drivers from the display adapter properties). Only once did I manage to get into XP (after changing the BIOS setting for the graphics card from 256 MB to 128 MB), but under the display adapter tab the driver showed an exclamation mark (ATI 9250!) with an "ATI 9250 Secondary" entry below it without the exclamation mark, and the ATI Catalyst program said it couldn't find the card. That was the only time I got past the XP logo. Now the monitor switches itself off every time. So, what do you think? 1) Is the card broken? 2) Is it the card's drivers? 3) Is it a PSU fault? 4) Anything else? Thanks for your help, and excuse my English!

    Read the article

  • Question marks showing in ls of directory. IO errors too.

    - by jaymoo
    Has anyone seen this before? I've got a RAID 5 array mounted on my server and for whatever reason it started showing this:
    jason@box2:/mnt/raid1/cra$ ls -alh
    ls: cannot access e6eacc985fea729b2d5bc74078632738: Input/output error
    ls: cannot access 257ad35ee0b12a714530c30dccf9210f: Input/output error
    total 0
    drwxr-xr-x 5 root root 123 2009-08-19 16:33 .
    drwxr-xr-x 3 root root  16 2009-08-14 17:15 ..
    ?????????? ? ?    ?      ?                ? 257ad35ee0b12a714530c30dccf9210f
    drwxr-xr-x 3 root root  57 2009-08-19 16:58 9c89a78e93ae6738e01136db9153361b
    ?????????? ? ?    ?      ?                ? e6eacc985fea729b2d5bc74078632738
    The md5 strings are actual directory names, not part of the error. The question marks are odd, and any directory shown with question marks throws an I/O error when you attempt to use or delete it. I was unable to umount the drive because it was "busy". Rebooting the server "fixed" it, but it was throwing some RAID errors on shutdown. I have configured two RAID 5 arrays and both started doing this on random files. Both use the following config:
    mkfs.xfs -l size=128m -d agcount=32
    mount -t xfs -o noatime,logbufs=8
    Nothing too fancy, just part of an optimized config for this box. We're not partitioning the drives, and that was suggested as a possible issue. Could this be the culprit?
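    As an aside, a rough sketch of the commands typically used to see what is keeping the mount busy and whether the kernel or the RAID layer is logging errors; the /dev/md0 device name and the use of Linux software RAID are assumptions here, not details from the post:
    fuser -vm /mnt/raid1                          # list processes holding files open on the mount
    dmesg | tail -n 50                            # recent kernel messages (XFS or block-layer I/O errors)
    cat /proc/mdstat                              # software RAID health, if Linux md is in use
    umount /mnt/raid1 && xfs_repair -n /dev/md0   # read-only consistency check once the array is unmounted (assumed device)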

    Read the article

  • Adding user groups from a remote domain server to the permissions of a Remote Desktop terminal server fails. Why?

    - by doveyg
    I have 3 computers, two of which are servers running Windows Server 2008 and one running Windows 7. One of the servers has the following roles installed: Active Directory, DHCP and DNS. The other server has the Terminal Server role installed. I am trying to log on to the Terminal Server via Remote Desktop from the Windows 7 machine using credentials from the Active Directory server. Sounds simple enough, right? Well, no. Whenever I try to add users or groups from the Active Directory domain server to the Terminal Server's RDP permissions, it seems to ignore, or forget, them. With the various methods I was able to find, it either adds a strange string of numbers after the user group or the icon to the left shows a question mark, and reopening the dialogue box replaces the user group with the name of the domain. I am confident I have the domain set up correctly, as I am able to log on with Active Directory accounts from other computers I have put in the domain, and when I attempt to browse the user objects from the domain I am prompted with a username/password field and am able to view the structure of the Active Directory objects. Please advise.

    Read the article

  • Notepad and Regedit do not launch from C:\Windows or C:\Windows\System32, but will from C:\Windows\SysWOW64

    - by lordhog
    My work machine was recently updated from Windows XP 64-bit to Windows 7 64-bit. I have found a few odd behaviours that I cannot explain. If I type Notepad.exe or RegEdit.exe in the Run dialog, the applications will not launch. When executing RegEdit.exe from the Run dialog, UAC does prompt for admin privileges, but the application never launches. Also, if I go to the Start menu and click on Notepad from there, Notepad never launches either. Other programs seem to be okay, like mspaint, wordpad, etc. However, if I launch Notepad or RegEdit from C:\Windows\SysWOW64, these applications will run, though I think they are the 32-bit versions rather than the 64-bit versions. I am not sure what is going on. Has anyone seen this behavior before, and do you know how to fix it? I have Windows 7 64-bit at home and I don't have this issue there. I even copied Notepad.exe from my home machine, placed it in the C:\ folder, and tried to launch it from there, and it still doesn't work. Regards, Mark

    Read the article

  • How to store Movies on a separate volume from the iTunes media folder?

    - by Manca Weeks
    I have a rather enormous Music collection. The music itself is approaching the 1TB mark. I am storing that on an external drive already. My iTunes library files are in their default location (/Users/me/Music/iTunes). My iTunes media folder is on an external drive /Volumes/iTunes/iTunes Music This has been working as expected. Now I would like to store just the contents of the Movies folder in the iTunes media folder on a separate drive. Apparently, iTunes doesn't like aliases or symlinks. I saw somewhere that one could mount a volume in a different directory than the default /Volumes. I would like to permanently mount my new Movies volume in the directory /Volumes/iTunes/iTunes Music/Movies. I know there is a command to do this, but how does one configure Mac OS 10.6.4 to always automatically mount that volume in this directory? I hope someone can enlighten me... If I find a solution, I can finally import all my movies into iTunes and be able to search them and stuff - it would be a dream. Thanks, M
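    For context, a rough sketch of the usual approach on Mac OS X 10.5/10.6: create the mount point, look up the volume's UUID, and add an /etc/fstab entry (edited with vifs) so the volume is mounted at that path automatically at startup. The volume name "Movies", the example UUID, and the \040 escaping of the spaces in the mount path are assumptions to verify, not details from the original question:
    sudo mkdir -p "/Volumes/iTunes/iTunes Music/Movies"
    diskutil info /Volumes/Movies | grep "Volume UUID"   # note the UUID of the Movies volume (assumed name)
    sudo vifs   # safely edit /etc/fstab and add a line along these lines:
    # UUID=1234ABCD-12AB-34CD-56EF-1234567890AB /Volumes/iTunes/iTunes\040Music/Movies hfs rw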

    Read the article

  • Word 2010 header "different first page" option does not work if you select the same built-in header preset?

    - by fredsbend
    I have a three-page Word 2010 document. I have set a header on the first page and checked the "Different First Page" option so that the following pages can use a different header. It works as expected as long as I don't select the same built-in header preset for the following pages. Here's what I am doing:
    1. Check "Different First Page".
    2. Make the header for the first page using the Alphabet preset.
    3. Attempt to make the header for the following pages by starting with the same Alphabet preset. I only want to change the text of the following headers but keep the same graphical effects.
    4. Click out of the header into the body of the document.
    Upon doing this, the header on the first page is updated to the one I just made for the following pages. I don't think I am doing anything wrong, because if I choose a different header preset it works as expected. If I select the same preset, however, it updates all headers, whether "Different First Page" is selected or not. If this is repeatable on other people's computers then I would say it's a Word bug. If not, then please help me figure out how to get this working right.

    Read the article

  • IOException: Unable To Delete Images Due To File Lock

    - by Arslan Pervaiz
    I am unable to delete an image file from my server path. It gives an error that the process cannot access the file "FileName" because it is being used by another process. I have tried many methods but still all in vain. Please help me out with this issue. Here is my code snippet.
    using System;
    using System.Data;
    using System.Web;
    using System.Data.SqlClient;
    using System.Web.UI;
    using System.Web.UI.HtmlControls;
    using System.Globalization;
    using System.Web.Security;
    using System.Text;
    using System.DirectoryServices;
    using System.Collections;
    using System.IO;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Drawing.Drawing2D;
    //============ Main Block =================
    byte[] data = (byte[])ds.Tables[0].Rows[0][0];
    MemoryStream ms = new MemoryStream(data);
    Image returnImage = Image.FromStream(ms);
    returnImage.Save(Server.MapPath(".\\TmpImages\\SavedImage.jpg"), System.Drawing.Imaging.ImageFormat.Jpeg);
    returnImage.Dispose(); // I tried this Dispose method to unlock the file, but it changed nothing.
    ms.Close(); // I tried the MemoryStream Close method as well, but it also did not work for me.
    watermark(); // My watermark method that prints a watermark image onto the saved image (the image converted from the byte array).
    DeleteImages(); // My delete method that I call to delete the images.
    //============ My Delete Method To Delete Files =================
    public void DeleteImages()
    {
        try
        {
            File.Delete(Server.MapPath(".\\TmpImages\\WaterMark.jpg"));  // This image is deleted fine.
            File.Delete(Server.MapPath(".\\TmpImages\\SavedImage.jpg")); // The exception is thrown on deleting this image.
        }
        catch (Exception ex)
        {
            LogManager.LogException(ex, "Error in Deleting Images.");
            Master.ShowMessage(ex.Message, true);
        }
    }
    //============ Method declaration that draws the watermark of one image onto another image =================
    public void watermark()
    {
        //create a image object containing the photograph to watermark
        Image imgPhoto = Image.FromFile(Server.MapPath(".\\TmpImages\\SavedImage.jpg"));
        int phWidth = imgPhoto.Width;
        int phHeight = imgPhoto.Height;
        //create a Bitmap the Size of the original photograph
        Bitmap bmPhoto = new Bitmap(phWidth, phHeight, PixelFormat.Format24bppRgb);
        bmPhoto.SetResolution(imgPhoto.HorizontalResolution, imgPhoto.VerticalResolution);
        //load the Bitmap into a Graphics object
        Graphics grPhoto = Graphics.FromImage(bmPhoto);
        //create a image object containing the watermark
        Image imgWatermark = new Bitmap(Server.MapPath(".\\TmpImages\\PrintasWatermark.jpg"));
        int wmWidth = imgWatermark.Width;
        int wmHeight = imgWatermark.Height;
        //Set the rendering quality for this Graphics object
        grPhoto.SmoothingMode = SmoothingMode.AntiAlias;
        //Draws the photo Image object at original size to the graphics object.
        grPhoto.DrawImage(
            imgPhoto,                               // Photo Image object
            new Rectangle(0, 0, phWidth, phHeight), // Rectangle structure
            0,        // x-coordinate of the portion of the source image to draw.
            0,        // y-coordinate of the portion of the source image to draw.
            phWidth,  // Width of the portion of the source image to draw.
            phHeight, // Height of the portion of the source image to draw.
            GraphicsUnit.Pixel); // Units of measure
        //-------------------------------------------------------
        //to maximize the size of the Copyright message we will
        //test multiple Font sizes to determine the largest posible
        //font we can use for the width of the Photograph
        //define an array of point sizes you would like to consider as possiblities
        //-------------------------------------------------------
        //Define the text layout by setting the text alignment to centered
        StringFormat StrFormat = new StringFormat();
        StrFormat.Alignment = StringAlignment.Center;
        //define a Brush which is semi trasparent black (Alpha set to 153)
        SolidBrush semiTransBrush2 = new SolidBrush(Color.FromArgb(153, 0, 0, 0));
        //define a Brush which is semi trasparent white (Alpha set to 153)
        SolidBrush semiTransBrush = new SolidBrush(Color.FromArgb(153, 255, 255, 255));
        //------------------------------------------------------------
        //Step #2 - Insert Watermark image
        //------------------------------------------------------------
        //Create a Bitmap based on the previously modified photograph Bitmap
        Bitmap bmWatermark = new Bitmap(bmPhoto);
        bmWatermark.SetResolution(imgPhoto.HorizontalResolution, imgPhoto.VerticalResolution);
        //Load this Bitmap into a new Graphic Object
        Graphics grWatermark = Graphics.FromImage(bmWatermark);
        //To achieve a transulcent watermark we will apply (2) color
        //manipulations by defineing a ImageAttributes object and
        //seting (2) of its properties.
        ImageAttributes imageAttributes = new ImageAttributes();
        //The first step in manipulating the watermark image is to replace
        //the background color with one that is trasparent (Alpha=0, R=0, G=0, B=0)
        //to do this we will use a Colormap and use this to define a RemapTable
        ColorMap colorMap = new ColorMap();
        //My watermark was defined with a background of 100% Green this will
        //be the color we search for and replace with transparency
        colorMap.OldColor = Color.FromArgb(255, 0, 255, 0);
        colorMap.NewColor = Color.FromArgb(0, 0, 0, 0);
        ColorMap[] remapTable = { colorMap };
        imageAttributes.SetRemapTable(remapTable, ColorAdjustType.Bitmap);
        //The second color manipulation is used to change the opacity of the
        //watermark. This is done by applying a 5x5 matrix that contains the
        //coordinates for the RGBA space. By setting the 3rd row and 3rd column
        //to 0.3f we achive a level of opacity
        float[][] colorMatrixElements = {
            new float[] {1.0f, 0.0f, 0.0f, 0.0f, 0.0f},
            new float[] {0.0f, 1.0f, 0.0f, 0.0f, 0.0f},
            new float[] {0.0f, 0.0f, 1.0f, 0.0f, 0.0f},
            new float[] {0.0f, 0.0f, 0.0f, 0.3f, 0.0f},
            new float[] {0.0f, 0.0f, 0.0f, 0.0f, 1.0f}};
        ColorMatrix wmColorMatrix = new ColorMatrix(colorMatrixElements);
        imageAttributes.SetColorMatrix(wmColorMatrix, ColorMatrixFlag.Default, ColorAdjustType.Bitmap);
        //For this example we will place the watermark in the upper right
        //hand corner of the photograph. offset down 10 pixels and to the
        //left 10 pixles
        int xPosOfWm = ((phWidth - wmWidth) - 10);
        int yPosOfWm = 10;
        grWatermark.DrawImage(imgWatermark,
            new Rectangle(xPosOfWm, yPosOfWm, wmWidth, wmHeight), //Set the detination Position
            0,        // x-coordinate of the portion of the source image to draw.
            0,        // y-coordinate of the portion of the source image to draw.
            wmWidth,  // Watermark Width
            wmHeight, // Watermark Height
            GraphicsUnit.Pixel, // Unit of measurment
            imageAttributes);   //ImageAttributes Object
        //Replace the original photgraphs bitmap with the new Bitmap
        imgPhoto = bmWatermark;
        grPhoto.Dispose();
        grWatermark.Dispose();
        //save new image to file system.
        imgPhoto.Save(Server.MapPath(".\\TmpImages\\WaterMark.jpg"), ImageFormat.Jpeg);
        imgPhoto.Dispose();
        imgWatermark.Dispose();
    }

    Read the article

  • PHP / MySQL: Database empties when I use a variable in the WHERE condition of the last mysql_query

    - by Christian Cugnet
    <?php require 'connect.php'; $search = $_POST["search"]; These two queries work fine. So I used their format for the one below. $result = mysql_query("SELECT * FROM `subjects` WHERE $search = `student_id`"); $result2 = mysql_query("SELECT * FROM `grades` WHERE $search = `student_id`"); while($row = mysql_fetch_array($result)) { $row2 = mysql_fetch_array($result2); echo"<table border='1'>"; echo "<tr>"; echo "<th>Subjects:</th>"; echo "<th>Current Mark:</th>"; echo "<th>Edit Mark:</th>"; echo"</tr>"; echo"<tr>"; echo "<td>". $row['c1'] ."</td>"; echo "<td>". $row2['m1'] ."</td>"; echo "<td><input type='text' name='m1'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c2'] ."</td>"; echo "<td>". $row2['m2'] ."</td>"; echo "<td><input type='text' name='m2'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c3'] ."</td>"; echo "<td>". $row2['m3'] ."</td>"; echo "<td><input type='text' name='m3'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c4'] ."</td>"; echo "<td>". $row2['m4'] ."</td>"; echo "<td><input type='text' name='m4'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c5'] ."</td>"; echo "<td>". $row2['m5'] ."</td>"; echo "<td><input type='text' name='m5'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c6'] ."</td>"; echo "<td>". $row2['m6'] ."</td>"; echo "<td><input type='text' name='m6'></td>"; echo "</tr>"; echo "<tr>"; echo "<td>". $row['c7'] ."</td>"; echo "<td>". $row2['m7'] ."</td>"; echo "<td><input type='text' name='m7'></td>"; echo "</tr>"; echo "</table>"; echo "<input type='submit' name='submit' value='Submit'>"; echo "</form>"; } $M1 = $_POST["m1"]; $M2 = $_POST["m2"]; $M3 = $_POST["m3"]; $M4 = $_POST["m4"]; $M5 = $_POST["m5"]; $M6 = $_POST["m6"]; $M7 = $_POST["m7"]; It works if I put numbers e.x. 11111 Otherwise it just enters blank spaces into the table. I've tried '".$search."' I've tried ".$search." mysql_query("UPDATE grades SET m1 = '$M1', m2 = '$M2',m3 = '$M3',m4 = '$M4',m5 = '$M5',m6 = '$M6',m7 = '$M7' WHERE $search = `student_id`"); ?> Table +------------+---+---+---+---+---+---+---+ |student_id|m1|m2|m3|m4|m5|m6|m7| +------------+---+---+---+---+---+---+---+ ===Database d1 == Table structure for table grades |------ |Column|Type|Null|Default |------ |//student_id//|int(5)|No| |m1|text|No| |m2|text|No| |m3|text|No| |m4|text|No| |m5|text|No| |m6|text|No| |m7|text|No| == Dumping data for table grades |11111| | | | | | | |11112|fg|fd|f|f|fd|f|f ===Database d1 == Table structure for table subjects |------ |Column|Type|Null|Default |------ |//student_id//|int(11)|No| |c1|text|No| |c2|text|No| |c3|text|No| |c4|text|No| |c5|text|No| |c6|text|No| |c7|text|No| == Dumping data for table subjects |11111|English|Math|Science|Sport|IT|Art|History |11112|grdgg|vsbvbbb|bdbbrfd|bdbrb|dbrbfbf|fbdfbdbf|dbfbdfb

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc.. I also listed a module named “Wind (zh-CN)”, which is created by one of my friend, Jeff Zhao (zh-CN). Now I would like to use a separated post to introduce this module since I feel it brings a new async programming style in not only Node.js but JavaScript world. If you know or heard about the new feature in C# 5.0 called “async and await”, or you learnt F#, you will find the “Wind” brings the similar async programming experience in JavaScript. By using “Wind”, we can write async code that looks like the sync code. The callbacks, async stats and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s firstly back to my second post in this series. As I mentioned in that post, when we wanted to read some records from SQL Server we need to open the database connection, and then execute the query. In Node.js all IO operation are designed as async callback pattern which means when the operation was done, it will invoke a function which was taken from the last parameter. For example the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database the code would be like this. It nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming if we need to copy some data from this database to another then we need to open another connection and execute the command within the function under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In the real project the logic would be more complicated. This means our application might be messed up and the business process will be fragged by many callback functions. I would like call this “Dense Callback Phobia”. This might be a challenge how to make code straightforward and easy to read, something like below. 
1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 
1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 
1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 
1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 
1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 
1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: })); Finally, send response back to the browser. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: })); If we compared with the previous code we will find now it became more readable and much easy to understand. It’s very easy to know what this function does even though without any comments. When user go to URL “/was/copyRecords” we will execute the function above. The code would be like this. 1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: }); And below is the logs printed in local compute emulator console. As we can see the functions executed one by one and then finally the response back to me browser.   Scaffold Functions in Wind Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I’m going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped from the original async function such as open database, create table, etc.. All of them are very similar, created a task by using Wind.Async.Task.create, return error or result object through Task.complete function. In fact, Wind provides some functions for us to create task object from the original async functions. If the original async function only has a callback parameter, we can use Wind.Async.Binding.fromCallback method to get the task object directly. For example the code below returned the task object which wrapped the file exist check function. 
1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists); In Node.js a very popular async function pattern is that, the first parameter in the callback function represent the error object, and the other parameters is the return values. In this case we can use another build-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below. 1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */ When I was testing the scaffold functions under Wind.Async.Binding I found for some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapped method manually.   Another scaffold method in Wind is the parallel tasks coordination. In this example, the steps of open database, retrieve records and recreated table should be invoked one by one, but it can be executed in parallel when copying data from database to table storage. In Wind there’s a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task. It will be returned only when all tasks had been completed, or any errors occurred. For example in the code below I used the Task.whenAll to make all copy operation executed at the same time. 1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });   Besides the task creation and coordination, Wind supports the cancellation solution so that we can send the cancellation signal to the tasks. It also includes exception solution which means any exceptions will be reported to the caller function.   Summary In this post I introduced a Node.js module named Wind, which created by my friend Jeff Zhao. As you can see, different from other async library and framework, adopted the idea from F# and C#, Wind utilizes runtime code generation technology to make it more easily to write async, callback-based functions in a sync-style way. 
By using Wind there are almost no callbacks, and the code is very easy to understand. Wind is still being developed and improved. There might be some problems, but the author, Jeff, would be very happy and enthusiastic to hear about your problems, feedback, suggestions and comments. You can contact Jeff by - Email: [email protected] - Group: https://groups.google.com/d/forum/windjs - GitHub: https://github.com/JeffreyZhao/wind/issues   Source code can be downloaded here.   Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • PSTN Trunk TDM400P Install on Asterisk / Trixbox

    - by Jona
    Hey All, I'm trying to get a TDM400P card with FXO module to connect to our PSTN line. The card is correctly detected by Linux: [trixbox1.localdomain asterisk]# lspci 00:09.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface I've run setup-pstn which produces the following output trixbox1.localdomain ~]# setup-pstn -------------------------------------------------------------- Detecting PSTN cards and USB PSTN Devices -------------------------------------------------------------- Hardware present! STOPPING ASTERISK Asterisk Stopped STOPPING FOP SERVER FOP Server Stopped Unloading DAHDI hardware modules: done Loading DAHDI hardware modules: wct4xxp: [ OK ] wcte12xp: [ OK ] wct1xxp: [ OK ] wcte11xp: [ OK ] wctdm24xxp: [ OK ] opvxa1200: [ OK ] wcfxo: [ OK ] wctdm: [ OK ] wcb4xxp: [ OK ] wctc4xxp: [ OK ] xpp_usb: [ OK ] Running dahdi_cfg: [ OK ] SETTING FILE PERMISSIONS Permissions OK STARTING ASTERISK Asterisk Started STARTING FOP SERVER FOP Server Started Chan Extension Context Language MOH Interpret Blocked State pseudo default en default In Service 1 from-pstn en default In Service dahdi_scan returns: dahdi_scan [1] active=yes alarms=OK description=Wildcard TDM400P REV I Board 5 name=WCTDM/4 manufacturer=Digium devicetype=Wildcard TDM400P REV I location=PCI Bus 00 Slot 10 basechan=1 totchans=4 irq=209 type=analog port=1,FXO port=2,none port=3,none port=4,none And asterisk can see the channel: > trixbox1*CLI> dahdi show channel 1 > Channel: 1LI> File Descriptor: 14 > Span: 11*CLI> Extension: I> Dialing: > noI> Context: from-pstn Caller ID: I> > Calling TON: 0 Caller ID name: > Mailbox: none Destroy: 0LI> InAlarm: > 1LI> Signalling Type: FXS Kewlstart > Radio: 0*CLI> Owner: <None> Real: > <None>> Callwait: <None> Threeway: > <None> Confno: -1LI> Propagated > Conference: -1 Real in conference: 0 > DSP: no1*CLI> Busy Detection: no TDD: > no1*CLI> Relax DTMF: no > Dialing/CallwaitCAS: 0/0 Default law: > ulaw Fax Handled: no Pulse phone: no > DND: no1*CLI> Echo Cancellation: > trixbox1128 taps trixbox1(unless TDM > bridged) currently OFF Actual > Confinfo: Num/0, Mode/0x0000 Actual > Confmute: No > Hookstate (FXS only): Onhook A cat of /etc/asterisk/dahdi.conf shows: [trixbox1.localdomain ~]# cat /etc/asterisk/dahdi-channels.conf ; Autogenerated by /usr/sbin/dahdi_genconf on Tue May 25 17:45:13 2010 ; If you edit this file and execute /usr/sbin/dahdi_genconf again, ; your manual changes will be LOST. ; Dahdi Channels Configurations (chan_dahdi.conf) ; ; This is not intended to be a complete chan_dahdi.conf. Rather, it is intended ; to be #include-d by /etc/chan_dahdi.conf that will include the global settings ; ; Span 1: WCTDM/4 "Wildcard TDM400P REV I Board 5" (MASTER) ;;; line="1 WCTDM/4/0 FXSKS (SWEC: MG2)" signalling=fxs_ks callerid=asreceived group=0 context=from-pstn channel => 1 callerid= group= context=default I have configured a "ZAP Trunk (DAHDI compatibility Mode)" with the ZAP identifier 1 and an outbound route, but when ever I try to make an external call via it I get the "All Circuits are busy now, please try your call again later message". I have one outbound route which uses the dial pattern 9|. and the Trunk Zap/1 and one Zap Trunk which uses Zap Identifier (trunk name): 1 and has no Dial Rules. The FXO module is directly connected to our phone line from BT via a BT-RJ11 cable. 
When running tail -f /var/log/asterisk/full and placing a call I get the following output: [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP TOS bits 184 [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP CoS mark 5 [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP TOS bits 136 [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP CoS mark 6 [May 26 11:10:52] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:1] Macro("SIP/801-b7ce8c28", "user-callerid,SKIPTTL,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:1] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:2] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:3] ExecIf("SIP/801-b7ce8c28", "1?Set(REALCALLERIDNUM=801)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:4] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:5] Set("SIP/801-b7ce8c28", "AMPUSERCIDNAME=Jona") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:6] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:7] Set("SIP/801-b7ce8c28", "AMPUSERCID=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:8] Set("SIP/801-b7ce8c28", "CALLERID(all)="Jona" <801>") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:9] Set("SIP/801-b7ce8c28", "REALCALLERIDNUM=801") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:10] ExecIf("SIP/801-b7ce8c28", "0?Set(CHANNEL(language)=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:11] GotoIf("SIP/801-b7ce8c28", "1?continue") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-user-callerid,s,20) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:20] NoOp("SIP/801-b7ce8c28", "Using CallerID "Jona" <801>") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:2] Set("SIP/801-b7ce8c28", "_NODEST=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:3] Macro("SIP/801-b7ce8c28", "record-enable,801,OUT,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:1] GotoIf("SIP/801-b7ce8c28", "1?check") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-record-enable,s,4) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:4] AGI("SIP/801-b7ce8c28", "recordingcheck,20100526-111052,1274868652.1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Launched AGI Script /var/lib/asterisk/agi-bin/recordingcheck [May 26 11:10:52] VERBOSE[2858] logger.c: recordingcheck,20100526-111052,1274868652.1: Outbound recording not enabled [May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28>AGI Script recordingcheck completed, returning 0 [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:5] MacroExit("SIP/801-b7ce8c28", "") in new stack [May 26 11:10:52] VERBOSE[2858] 
logger.c: -- Executing [901483890915@from-internal:4] Macro("SIP/801-b7ce8c28", "dialout-trunk,1,01483890915,") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:1] Set("SIP/801-b7ce8c28", "DIAL_TRUNK=1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:2] GosubIf("SIP/801-b7ce8c28", "0?sub-pincheck,s,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:3] GotoIf("SIP/801-b7ce8c28", "0?disabletrunk,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:4] Set("SIP/801-b7ce8c28", "DIAL_NUMBER=01483890915") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:5] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=tr") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:6] Set("SIP/801-b7ce8c28", "OUTBOUND_GROUP=OUT_1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:7] GotoIf("SIP/801-b7ce8c28", "1?nomax") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s,9) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:9] GotoIf("SIP/801-b7ce8c28", "0?skipoutcid") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:10] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:11] Macro("SIP/801-b7ce8c28", "outbound-callerid,1") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:1] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:2] ExecIf("SIP/801-b7ce8c28", "0?Set(REALCALLERIDNUM=801)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:3] GotoIf("SIP/801-b7ce8c28", "1?normcid") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,6) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:6] Set("SIP/801-b7ce8c28", "USEROUTCID=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:7] Set("SIP/801-b7ce8c28", "EMERGENCYCID=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:8] Set("SIP/801-b7ce8c28", "TRUNKOUTCID=") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:9] GotoIf("SIP/801-b7ce8c28", "1?trunkcid") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,12) [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:12] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:13] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:14] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=prohib_passed_screen)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:12] ExecIf("SIP/801-b7ce8c28", "0?AGI(fixlocalprefix)") in new stack [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:13] Set("SIP/801-b7ce8c28", "OUTNUM=01483890915") in new stack [May 26 11:10:52] VERBOSE[2858] 
logger.c: -- Executing [s@macro-dialout-trunk:14] Set("SIP/801-b7ce8c28", "custom=DAHDI/1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:15] ExecIf("SIP/801-b7ce8c28", "0?Set(DIAL_TRUNK_OPTIONS=M(setmusic^))") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:16] Macro("SIP/801-b7ce8c28", "dialout-trunk-predial-hook,") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk-predial-hook:1] MacroExit("SIP/801-b7ce8c28", "") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:17] GotoIf("SIP/801-b7ce8c28", "0?bypass,1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:18] GotoIf("SIP/801-b7ce8c28", "0?customtrunk") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:19] Dial("SIP/801-b7ce8c28", "DAHDI/1/01483890915,300,") in new stack
[May 26 11:10:52] WARNING[2858] app_dial.c: Unable to create channel of type 'DAHDI' (cause 0 - Unknown)
[May 26 11:10:52] VERBOSE[2858] logger.c: == Everyone is busy/congested at this time (1:0/0/1)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:20] Goto("SIP/801-b7ce8c28", "s-CHANUNAVAIL,1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,1)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:1] GotoIf("SIP/801-b7ce8c28", "1?noreport") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,3)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:3] NoOp("SIP/801-b7ce8c28", "TRUNK Dial failed due to CHANUNAVAIL (hangupcause: 0) - failing through to other trunks") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:5] Macro("SIP/801-b7ce8c28", "outisbusy,") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:1] Playback("SIP/801-b7ce8c28", "all-circuits-busy-now,noanswer") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'all-circuits-busy-now.ulaw' (language 'en')
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:2] Playback("SIP/801-b7ce8c28", "pls-try-call-later,noanswer") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'pls-try-call-later.ulaw' (language 'en')
[May 26 11:10:54] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking
[May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (macro-outisbusy, s, 2) exited non-zero on 'SIP/801-b7ce8c28' in macro 'outisbusy'
[May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (from-internal, 901483890915, 5) exited non-zero on 'SIP/801-b7ce8c28'
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [h@from-internal:1] Macro("SIP/801-b7ce8c28", "hangupcall") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:1] ResetCDR("SIP/801-b7ce8c28", "vw") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:2] NoCDR("SIP/801-b7ce8c28", "") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:3] GotoIf("SIP/801-b7ce8c28", "1?skiprg") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,6)
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:6] GotoIf("SIP/801-b7ce8c28", "1?skipblkvm") in new stack
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,9)
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:9] GotoIf("SIP/801-b7ce8c28", "1?theend") in new stack
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,11)
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:11] Hangup("SIP/801-b7ce8c28", "") in new stack
[May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (macro-hangupcall, s, 11) exited non-zero on 'SIP/801-b7ce8c28' in macro 'hangupcall'
[May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (from-internal, h, 1) exited non-zero on 'SIP/801-b7ce8c28'
I'm guessing I've missed a configuration step somewhere but no idea where, any help greatly appreciated.
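For what it's worth, my current understanding (which may well be wrong) is that the DAHDI channel the Dial() points at also has to be defined on the DAHDI/Asterisk side, along the lines of the sketch below; the signalling type and zone are my guesses for a single UK analogue trunk:

    ; /etc/dahdi/system.conf (guessing one analogue trunk port on span 1)
    loadzone = uk
    defaultzone = uk
    fxsks = 1

    ; /etc/asterisk/chan_dahdi.conf
    [channels]
    context = from-pstn
    signalling = fxs_ks
    channel => 1

followed by dahdi_cfg and an Asterisk restart (or dahdi_genconf on a FreePBX box) to pick the changes up. Is that roughly right, or have I missed something else entirely?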

    Read the article

  • Search Alternative Search Engines from within Bing’s Search Page

    - by Asian Angel
    So you love using Bing Search but may still be curious to see what another search engine would provide. Now you can search using another search engine from within the Bing Search page and enjoy numbered results using two simple user scripts. Note: These user scripts may also be added to other browsers (e.g. Iron, Opera, etc.). Before: Bing Search does nicely on searches, but what if you would like to try the same search with another search engine? Having to manually open a new tab, navigate to the appropriate website, and then start a new search is not too convenient. Another possible frustration for some people may be not knowing just how many search results they have looked through. Well, both of these small problems are easy to fix with two wonderful user scripts. Installing the Scripts The first script that we installed (you may do either one first) was for adding alternative search engine links. Click “Install” to get started… Note: For our example we had the Greasemonkey extension installed. When the confirmation window pops up click on “Install” to finish adding the user script to Firefox. Repeating the same procedure as above, add your second script to Firefox. Confirm the second user script installation and you are ready to enjoy nicer Bing Search results. After: As you can see, there are two small, unobtrusive differences in our search results. The alternative search engine links are conveniently located at the top of the page, and now you can easily know just how many search results you have looked through. Here are the results when we transferred the search over to Yahoo. Our search transferred to Ask Search. The alternative search links can be very helpful if Bing is not providing the kind of search results that you are hoping for. Still going very nicely past the 100 mark… Conclusion If you have been wanting a small booster to searching with Bing then these two scripts will get you on your way. Using Opera Browser? See our how-to for adding user scripts to Opera here. Links Install the Bing (Alternate Search Engine Links) User Script Install the Bing Numbered Search Results User Script Download the Greasemonkey extension for Firefox (Mozilla Add-ons) Download the Stylish extension for Firefox (Mozilla Add-ons)

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the “Last Good Build” and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well, but there is a dimension missing here, relating to releases. Often you run several release builds before you actually deploy the result of the build to a test or production system. When you do this, wouldn’t it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version? At our company, we have developed a Release Repository, which basically is a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip-files, sql scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build etc. Also, it contains the associated changesets and work items. When deploying the MSI’s for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets: Here is a sample screenshot of how this looks for a sample build/application. The web site is available both for us and also for the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version. There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedChangesets. You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process: Now, you need to assign the value from AssociatedChangesets to this variable. This is done using the Assign workflow activity: Now you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated.
To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable. First you add an InArgument on your activity where you can pass in the variable that we defined:

[RequiredArgument]
public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated with that changeset:

foreach (Changeset ch in associatedChangesets)
{
    // Add change
    theChangesets.Add(
        new AssociatedChangeset(ch.ChangesetId, ch.ArtifactUri, ch.Committer, ch.Comment, ch.ChangesetId));

    foreach (var wi in ch.WorkItems)
    {
        theWorkItems.Add(
            new AssociatedWorkItem(wi["System.AssignedTo"].ToString(),
                                   wi.Id,
                                   wi["System.State"].ToString(),
                                   wi.Title,
                                   wi.Type.Name,
                                   wi.Id,
                                   wi.Uri));
    }
}

NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information that is eventually pushed to the release repository.
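For anyone who wants to see the pieces above in one place, here is a minimal sketch (not our actual activity; the class name and the plain string output are made up for the illustration) of a custom activity that receives the variable and flattens it into release note lines:

using System.Activities;
using System.Collections.Generic;
using Microsoft.TeamFoundation.VersionControl.Client;

public sealed class CollectReleaseNotes : CodeActivity
{
    // The assocChangesets variable from the build process template is passed in here
    [RequiredArgument]
    public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

    // Simple string output, just to keep the sketch self-contained
    public OutArgument<IList<string>> ReleaseNotes { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        IList<Changeset> associatedChangesets = context.GetValue(this.AssociatedChangesets);
        var notes = new List<string>();

        foreach (Changeset ch in associatedChangesets)
        {
            // One line per changeset...
            notes.Add(string.Format("Changeset {0} by {1}: {2}", ch.ChangesetId, ch.Committer, ch.Comment));

            // ...and one line per associated work item
            foreach (var wi in ch.WorkItems)
            {
                notes.Add(string.Format("  Work item {0} [{1}]: {2}", wi.Id, wi["System.State"], wi.Title));
            }
        }

        context.SetValue(this.ReleaseNotes, (IList<string>)notes);
    }
}

In the real implementation the same loop would push AssociatedChangeset/AssociatedWorkItem objects to the release repository instead of building strings.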

    Read the article

  • PowerShell Control over Nikon D3000 Camera

    My wife got me a Nikon D3000 camera for Christmas last year, and I'm loving it but still trying to wrap my head around some of its features.  For instance, when you plug it into a computer via USB, it doesn't show up as a drive like most cameras I've used, but rather it shows up as Computer\D3000.  After a bit of research, I've learned that this is because it implements the MTP/PTP protocol, and thus doesn't actually let Windows mount the camera's storage as a drive letter.  Nikon describes the use of the MTP and PTP protocols in their cameras here. What I'm really trying to do is gain access to the camera's file system via PowerShell.  I've been using a very handy PowerShell script to pull pictures off of my cameras and organize them into folders by date.  I'd love to be able to do the same thing with my Nikon D3000, but so far I haven't been able to figure out how to get access to the files in PowerShell.  If you know how, I'd appreciate any links/tips you can provide.  All I could find is a shareware product called PTPdrive, which I'm not prepared to shell out money for (yet).  (And yes, you can do much the same thing with Windows 7's Import Pictures and Videos wizard, which is pretty good too.) However, in my searching, I did find some really cool stuff you can do with PowerShell and one of these cameras, like actually taking pictures via PowerShell commands.  Credit for this goes to James O'Neill and Mark Wilson.  Here's what I was able to do: Taking Pictures via PowerShell with D3000 First, connect your camera, turn it on, and launch PowerShell.  Execute the following commands to see what commands your device supports.  $dialog = New-Object -ComObject "WIA.CommonDialog" $device = $dialog.ShowSelectDevice() $device.Commands You should see something like this: Now, to take a picture, simply point your camera at something and then execute this command: $device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}") Imagine my surprise when this actually took a picture (with auto-focus). Imagine what you could do with a camera completely under the control of your computer.  Time-lapse photography would be pretty simple, for instance, with a very simple loop that takes a picture and then sleeps for a minute (or whatever time period).  Hooked up to a laptop for portability (and an A/C power supply), this would be pretty trivial to implement.  I may have to give it a shot and report back.
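As a rough sketch of that time-lapse idea (untested here, and the shot count and interval are just placeholders), the loop could look something like this:

$dialog = New-Object -ComObject "WIA.CommonDialog"
$device = $dialog.ShowSelectDevice()
$takePicture = "{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}"   # same command ID as above
for ($shot = 1; $shot -le 60; $shot++) {
    $device.ExecuteCommand($takePicture) | Out-Null        # fire the shutter
    Start-Sleep -Seconds 60                                # one frame per minute
}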

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012 News Facts During his opening keynote address at Oracle OpenWorld, Oracle CEO, Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Next-Generation Technologies Deliver Dramatic Performance Improvements Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database Workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including: Four times the Flash memory capacity of the previous generation; with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash; 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold; 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors; Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data; Up to 30 percent reduction in power and cooling. Configured for Your Business, Available Today Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations and existing systems can also be upgraded with Oracle Exadata X3-2 servers. 
Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications. Supporting Quotes “Forward-looking enterprises are moving towards Cloud Computing architectures,” said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. “Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today.” Supporting Resources Oracle Press Release Oracle Exadata Database Machine Oracle Exadata X3-2 Database In-Memory Machine Oracle Exadata X3-8 Database In-Memory Machine Oracle Database 11g Follow Oracle Database via Blog, Facebook and Twitter Oracle OpenWorld 2012 Oracle OpenWorld 2012 Keynotes Like Oracle OpenWorld on Facebook Follow Oracle OpenWorld on Twitter Oracle OpenWorld Blog Oracle OpenWorld on LinkedIn Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • Developer Training – Importance and Significance – Part 1

    - by pinaldave
    Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary - Part 5 Can anyone remember their final day of schooling?  This is probably a silly question because – of course you can!  Many people mark this as the most exciting, happiest day of their life.  It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Beginning in the Real World However, many former students will be disappointed to find out that once they become employees, learning is not over.  Many companies are discovering the importance and benefits of training their employees.  You can breathe a sigh of relief, though, because for this kind of training there are usually no tests! We often think that we go to school in our younger years so that we do all our learning at once, and then for the rest of our lives we use that knowledge.  But in so many cases, especially for developers, the opposite is true.  It takes many years of school to learn the basics of a field, and then our careers are spent learning to become experts. For this, and so many other reasons, training is very important.  Example one: developer training leads to better employees.  A company is only as good as the people it employs, and one way to ensure that you have employed the right candidate is through training.  Training can take a regular “stone” and polish it into a “diamond.”  Employees who have been well-trained will be better at their jobs and produce a better product. Most Expensive Resource Did you know that one of the most expensive operating costs for any company is not buying goods, or advertising, but its employees – especially having to hire new employees?  Bringing in new people, getting them up to speed, and providing them with perks to attract them to a company is a huge cost for companies.  So employee retention – keeping the employees you already have, and keeping them happy – is incredibly important from a business perspective.  And research shows that a well-trained employee is a happy employee.  They feel more confident in their job, happier with their position, and more cared-about – and therefore less likely to leave in search of a better job.  Employee training leads to better retention. Good Morale On the subject of keeping employees happy in order to keep them at a company, the complement to that research shows that happier employees are more efficient and overall better at their jobs.  You don’t have to be a scientist to figure out why this is true.  An employee who feels that his company cares about him and his educational future will work harder for the company.  He or she will put in that extra hour during the busy season that makes all the difference in the end.  Good morale is good for the company. If good morale is better for the company, you know that it goes hand-in-hand with something even better – better efficiency.  An employee who is well trained obviously knows more about their job and all the technical aspects.  That means when a problem crops up – and they inevitably do – this employee will be well-equipped to deal with it, with fewer complications and no need to go searching for help from higher up.  When employees are well trained, companies run more smoothly. 
A Better Product Of course, all of these “pros” for employee training are leading up to the one thing that companies truly care about – a better product.  We have shown that employees who have been trained to be competitive in the market are happier at the company, they are more efficient, and their morale is better.  The overall result is that the company’s product – whether it is a database, piece of equipment, or even a physical good – is better.  And a better product will always be more competitive on the market. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Challenge 19 – An Explanation of a Query

    - by Dave Ballantyne
    I have received a number of requests for an explanation of my winning query of TSQL Challenge 19. This involved traversing a hierarchy of employees and rolling a count of orders from subordinates up to superiors. The first concept I shall address is the hierarchyId, which is constructed within the CTE called cteTree.   cteTree is a recursive CTE that will expand the parent-child hierarchy of the personnel in the table @emp.  One useful feature of a recursive CTE is that data can be ‘passed’ from the parent to the child rows.  The hierarchyId column is similar to the hierarchyId data type that was introduced in SQL Server 2008 and represents the position of the person within the organisation. Let us start with a simplistic example: Albert manages Bob and Eddie.  Bob manages Carl and Dave. The hierarchyId will represent each person’s position in this relationship in a single field.  In this simple example we could append the user IDs together into a varchar field as detailed below. This will enable us to select a branch of the tree by filtering using WHERE hierarchyId LIKE ‘1,2%’ to select Bob and all his subordinates.  Naturally, this is not comprehensive enough to provide a full solution, but as opposed to concatenating the IDs together into a varchar-typed column, we can apply the same theory to a varbinary.  We do this by CASTing the IDs into a datatype of varbinary(4), 4 being the number of bytes used to store an integer, and building a hierarchyId from those.  For example: The important point to bear in mind for later in the query is that the binary data generated is 'byte order comparable', i.e. we can ORDER a dataset by it and the resulting data will be in the order required. Now would probably be a good time to download the example file and, after the CTE ‘cteTree’, uncomment the line ‘select * from cteTree’.  Mark this and all prior code and execute.  This will show you how this theory directly relates to the actual challenge data.  The only deviation from the above is that instead of using the ID of an employee, I have used the row_number() ranking function to order each level by LastName, FirstName.  This enables me to order by the hierarchyId in the final result set so that the result set is in the required order. Your output should be something like the below.  Notice also the ‘Level’ column that contains the depth of the employee within the tree.  I would encourage you to ‘play’ with the query; change the order in the row_number() or the length of the cast in the hierarchyId to see how that affects the outcome.  The next CTE, ‘cteTreeWithOrderCount’, is a join between cteTree and the @ord table, and COUNTs the number of orders per employee.  A LEFT JOIN is employed here to account for the occasion where an employee has made no sales.   Executing ‘Select * from cteTreeWithOrderCount’ will return the result set as below.  The order here is unimportant as this is only a staging point of the data and only the final result set in a CTE chain needs an ORDER BY clause, unless TOP is utilised. cteExplode joins the above result set to the tally table (Nums) for Level occurrences.  So, if Level is 2 then 2 rows are required.  This is done to expand the dataset, to create a new column (PathInc), which is the (n+1) integers contained within the hierarchyId.  For example, with the data for Robert King as given above, the below 3 rows will be returned. 
From this you can see that the pathinc column now contains the values for Andrew Fuller and Steven Buchanan who are Robert King’s superiors within the tree.    Finally cteSumUp, sums the orders for each person and their subordinates using the PathInc generated above, and the final select does the final simple mathematics and filters to restrict the result set to only the ‘original’ row per employee.
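For readers who have not downloaded the example file, a stripped-down sketch of the cteTree idea is shown below; the column names are invented for the illustration and this is not the exact winning query:

;WITH cteTree AS
(
    -- Anchor: people with no manager sit at the root of the tree
    SELECT e.EmpId, e.LastName, e.FirstName,
           1 AS [Level],
           CAST(CAST(CAST(ROW_NUMBER() OVER (ORDER BY e.LastName, e.FirstName) AS int) AS binary(4)) AS varbinary(max)) AS hierarchyId
      FROM @emp e
     WHERE e.MgrId IS NULL
    UNION ALL
    -- Recursive part: append 4 more bytes per level, ordering siblings by name
    SELECT e.EmpId, e.LastName, e.FirstName,
           p.[Level] + 1,
           p.hierarchyId + CAST(CAST(ROW_NUMBER() OVER (PARTITION BY e.MgrId ORDER BY e.LastName, e.FirstName) AS int) AS binary(4))
      FROM @emp e
      JOIN cteTree p ON e.MgrId = p.EmpId
)
SELECT *
  FROM cteTree
 ORDER BY hierarchyId;   -- byte-order comparable, so this walks the tree top to bottom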

    Read the article

  • POP Forums v9 Beta 1 for ASP.NET MVC 3 posted to CodePlex!

    - by Jeff
    As promised, I posted a beta build of my forum app for ASP.NET MVC 3. Get the new goodies here: http://popforums.codeplex.com/releases/view/58228 This is the first beta for the ASP.NET MVC 3 version of POP Forums. It is nearly feature complete, and ready for testing and feedback. For previous release notes, look here, here and here.Check out the live preview: http://preview.popforums.com/ForumsSetup instructions are on the home page of this project. The new hotness in the beta, or what has been done since the last preview: All views converted to use Razor E-mail subscription/notification of new posts New post indicators/mark read buttons Permalinks to posts Jump to newest post (from new post indicators) Recent topics Favorite topics Moderator functions for topics (pin/close/delete, plus move and rename) Search, ported from v8. Not a ton of optimization here, or new unit testing, but the old version worked pretty well User posts (topics the user posted in) Forgot password Vanity items (signatures and avatars) Hide vanity items per user preference Some minor data caching where appropriate A little bit of UI refinement Lots-o-bug fixes Lots-o-unit tests What's next? The plan between now and the next beta is as follows: Continue working through features/tasks, and fix bugs as they're reported Integrate the forum into a real, production site Refine the UI Refactor as much as possible... the code organization is not entirely logical in some places After the second beta, a release candidate will follow, with a real "final" release after that. Subsequent releases should come relatively frequently and without a lot of risk. The trick in building this thing has been that it mostly tossed the previous WebForms version, which was all full of crusties. The time table for this is a little harder to pin down, as day jobs and families will have their effect. Other notes Refactoring will be a priority. As the features of MVC have evolved, so have my desires to use it in a fashion that makes things clear and easy to follow. I don't even know if anyone will ever start mucking around in the code, but on the off chance they do, I'd like what they find to not suck. Other nice-to-haves are builds to target Windows Azure and SQL CE. A nice setup UI would be super too. I think the ASP.NET MVC world has gone long enough without a decent forum.The biggest challenge that I've found is making the forum something that can be dropped in any app. While it does rope its views into an area, areas are mostly just routing details. I haven't thought of a clever way yet to limit dependency injection, for example, to just the forum bits. I mean, everyone should be using Ninject, but how realistic is that? ;)How much time and effort should you spend on POP Forums in its current state? Change is inevitable, but at this point I'm reasonably committed to not changing the database schema. I really think it will stay as-is. All bets are off for the various interfaces throughout the app, but the data should generally resist change. It's not even that different from v8, which was one of the original goals because I didn't want to rewrite SQL or introduce a new ORM or whatever. My point is that if you wanted to build a site around this today, even though it's not entirely functional, I think it's low risk in terms of data loss. I can't vouch for whether or not you know what you're doing.I've been having some chats with people lately about quoting posts, and honestly there has to be something better and straight forward. 
That continues to be a holy grail of mine, and some day, I hope to find it. Enjoy... it's starting to feel more real every day!

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers   We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers.  Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence, preconcieved and implemented inside an applications.  Read on for more information! TM Forum Management World, Nice, France - 18 May 2010 News Facts To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced their data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Product Details Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using: More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity. Embedded OLAP cubes for extremely fast dimensional analysis of business information. Embedded data mining models for sophisticated trending and predictive analysis. Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements. With Oracle Communications Data Model, CSPs can jump start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more. Supporting Quotes "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation. "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. 
They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC. "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum. Supporting Resources Oracle Communications Oracle Communications Data Model Data Sheet Oracle Communications Data Model Podcast Oracle Data Warehousing Oracle Communications on YouTube Oracle Communications on Delicious Oracle Communications on Facebook Oracle Communications on Twitter Oracle Communications on LinkedIn Oracle Database on Twitter The Data Warehouse Insider Blog

    Read the article

  • The “Customer” Experience Revolution is Here

    - by Natalia Rachelson
    A guest post by Anthony Lye, SVP, Oracle Development The Experience Revolution is here, and we are going to explore and celebrate our new customer experience ventures and strategy in an extraordinary way. In true Oracle fashion, we are hosting an exceptional event, bringing together customer experience advocates, visionaries and practitioners to discover and define Oracle’s Customer Experience vision. The Experience Revolution is best described as today’s era of the empowered consumer. For those of us who work with customers on a daily basis, we know that the modern consumer demands fast, accurate, consistent information across all communication channels. And if they don’t like the services received, they can easily take to social channels to voice disapproval. For this reason, organizations today operate in an environment where traditional methods of differentiation are less effective and customer experience has become the primary driver of business value. Here’s some food for thought: according to the 2011 Customer Experience Impact (CEI) Report, a full 89 percent of consumers will switch brands for a better customer experience. In short, in today’s era of the empowered consumer, delivering excellent customer experiences is what will, and is, defining the next great brands. At The Experience Revolution, Oracle President Mark Hurd will detail the vision of where customer experience is going and how Oracle will help you get there. He will introduce for the first time Oracle Customer Experience, a cross-stack suite of customer experience products that enable organizations to: Engage customers with a consistent, connected and personalized brand experience across all channels and devices Deliver exceptional cross-channel order fulfillment and customer service through web, call centers and social networks Connect and analyze data from all interactions to better personalize experiences and identify hidden opportunities The Experience Revolution will also include an interactive gallery of customer experience interactions, featuring videos, touch screens and near field communication technology that will guide each attendee through an individualized event experience. We hope you will join us for an incredible evening on June 25, from 6:00 – 9:00 p.m. at Gotham Hall in New York City. You can register for The Experience Revolution here. And if you haven’t already joined the conversation on Twitter, please do: #OracleCX, #ExperienceRevolution

    Read the article

  • Collaborate10 &ndash; THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEHotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted it is a nice hotel with nice rooms. THEanalytics Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held. Two solid days of BI, Warehousing and Analytics organized by the BIWA SIG at IOUG. Didn't get to see all sessions, but what struck me was the high interest in Analytics. Marty Gubar's OLAP session was full and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB where you generate an RPD is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is all a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, and so, if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... and it is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls, all just pure Excel. THEdatabasemachine Got some very good questions in my "what makes Exadata fast" session and overall, the interest in Exadata is overwhelming. One of the things that I did try to do in my session is to get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the common over-indexing and how everyone has to unlearn all of that on Exadata. The main thing however is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2. 5TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe that these newer media point out a trend. In-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities. THEcoolkids One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both OLAP and OWB hands-on. Also quite nice to see that the folks at RittmanMead spent so much time on preparing for that session. While all of them put down cool stuff, none was more cool than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPad ;-) THEend All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article

  • Windows Phone appointment task

    - by Dennis Vroegop
    Originally posted on: http://geekswithblogs.net/dvroegop/archive/2014/08/10/windows-phone-appointment-task.aspx I am currently working on a new version of my AgeInDays app for Windows Phone. This app calculates how old you are in days (or weeks, depending on your preferences). The inspiration for this app came from my father, who once told me he proposed to my mother when she was 1000 weeks old. That left me wondering: how old in weeks or days am I? And being the geek I am, I wrote an app for it. If you have a Windows Phone, you can find it at http://www.windowsphone.com/en-in/store/app/age-in-days/7ed03603-0e00-4214-ad04-ce56773e5dab A new version of the app was published quite quickly, adding the possibility to mark a date in your agenda when you will have reached a certain age. Of course the logic behind this is extremely simple. Just take a DateTime, populate it with the given date from the DatePicker, then call AddDays(numDays) and voila, you have the date. Now all I had to do was implement a way to store this in the user's calendar so he would get a reminder when that date occurred. Luckily, the Windows Phone SDK makes that extremely simple: public void PublishTask(DateTime occuranceDate, string message) { var task = new SaveAppointmentTask() { StartTime = occuranceDate, EndTime = occuranceDate, Subject = message, Location = string.Empty, IsAllDayEvent = true, Reminder = Reminder.None, AppointmentStatus = AppointmentStatus.Free };   task.Show(); }  And that's it. Whenever I call the PublishTask method an appointment will be made and put in the calendar. Well, not exactly: a template will be made for that appointment and the user will see that template, giving him the option to either discard or save the reminder. The user can also make changes before submitting this to the calendar: it would be useful to be able to change the text in the agenda and that's exactly what this allows you to do. Now, see at the bottom of the screen the option "Occurs". This tiny field is what this post is about. You cannot set it from the code. I want to be able to have repeating items in my agenda. Say, for instance, you're counting down to a certain date; I want to be able to give you that option as well. However, I cannot. The field "Occurs" is not part of the Task you create in code. Of course, you could create a whole series of events yourself. Have the "Occurs" field in your own user interface and make all the appointments. But that's not the same. First, the system doesn't recognize them as part of a series. That means if you want to change the text later on in one of the occurrences it will not ask you if you want to open this one or the whole series. More important, however, is that the user has to acknowledge each and every single occurrence and save it into the agenda. Now, I understand why they implemented the system in such a way that the user has to approve an entry. You don't want apps to automatically fill your agenda with messages such as "Remember to pay for my app!". But why not include the "Occurs" option? The user can still opt out if they see this happening. I hope an update will fix this soon. But for now: you just have to count down to your birthday yourself. My app won't support this.
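As a quick usage note, calling the helper above once the user has picked a date is then a one-liner; the control name and the 10,000-day figure below are made up for the example:

// birthDatePicker is a hypothetical DatePicker control on the page
DateTime birthDate = birthDatePicker.Value.Value;
DateTime targetDate = birthDate.AddDays(10000);              // the day you turn 10,000 days old
PublishTask(targetDate, "Today you are 10,000 days old!");   // opens the save-appointment screen for the user to confirm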

    Read the article

  • SharePoint 2010 release date - is it that important?

    - by CharlesLee
    There has been lots of excitement in the SharePoint community over the last few days as Microsoft have announced the official release date of SharePoint 2010. May 12th is the date for your diaries (RTM in April.) The twittersphere has been telling everyone for the last few days about this news and there is much excitement. The major conferences this year all seem to have a SharePoint 2010 focus and some are entirely focussed on the new product (e.g. SharePoint Evolution Conference.)  Now by all accounts Microsoft have plugged some significant functionality gaps that exist in WSS 3.0 and MOSS 2007 and provided some exciting new functionality.  You don't need me to tell you about these as the MVPs (and other community members) are doing a sterling job, after all that is why Microsoft has MVPs in the first place. Lets get real for a second though as there is a significant investment involved in moving to SharePoint 2010:  Firstly you need 64 bit architecture across the board, now for some environments that is no inconsequential hurdle, that's a pretty significant roadblock.   The development farm, test farm and UAT farm are all going to require the same infrastructure upgrades. To take advantage of the tooling for SP2010 you will need to upgrade to Visual Studio 2010 and your development team is going to require 64 bit hardware/OS too.  I would not recommend installing SP 2010 in client installation mode (i.e. for Windows 7) on your developer machines, I would use this for demo machines only. Something that lots of people seem to forget in all their whooping and hollering about the new release is that there is a large amount of end user training going to be required as the browser UI has now adopted the omnipotent ribbon interface and there are other new and more complicated features. SharePoint Designer has also entirely changed in both look and feel and some significant feature changes have taken place. Lest we should forget that some companies have not long upgraded to MOSS 2007 and are yet to see a significant ROI for that project. And the reticence that most companies feel about implementing v1 Microsoft products.  This is only the surface of the deeper issues which would be involved in any upgrade process, so I guess I share a small part of the concern voiced by Mark Miller of EndUserSharePoint.com.  Is SharePoint 2010 relevant? I don't share this sentiment in its entirety as I firmly believe that all companies should be looking at SharePoint 2010 from day one, however most large scale existing implementations of MOSS 2007 are going to be several years away from a serious upgrade project.  So should the conference organisers and the SharePoint community as a whole be a little more understanding of the real world issues?  It's easy to get carried away in the excitement of a new product and new tools to play with but there needs to be a focus on the real world issues that most people are facing day to day and at the moment and for the short term future (at the very least the next 12 months) that is fairly and squarely in the WSS 2.0/3.0 and SPS 2003/MOSS 2007 camps. Don't get me wrong, I am very very excited about getting to grips with SharePoint 2010 in the real world and I cannot wait for my first real project to come along, but for now I am just being realistic about the reality for most people who work with SharePoint. 
I have been spending a lot of time on www.sharepointoverflow.com recently as there is a community of people building up who are committed to answering the real world questions that folks are dealing with every day.  I urge you to take a look and either ask or answer some questions direct from the front line of the SharePoint world.

    Read the article

  • Getting Started with Chart control in ASP.Net 4.0

    - by sreejukg
    In this article I am going to demonstrate the Chart control available in ASP.Net 4 and Visual Studio 2010. Most web applications need to generate reports for business users, and business users are happier viewing results in a graphical format than reading them as numbers. For the purpose of this demonstration, I have created a sales table. I am going to create charts from this sales data. The sales table looks as follows. I have created an ASP.Net web application project in Visual Studio 2010. I have a default.aspx page that I am going to use for the demonstration. First I am going to add a chart control to the page. Visual Studio 2010 has a chart control; the Chart control comes under the Data tab in the toolbox. Drag and drop the Chart control to the default.aspx page. Visual Studio adds the below markup to the page. <asp:Chart ID="Chart1" runat="server"></asp:Chart> In the designer view, the Chart control gives the following output. As you can see this is just like other server controls in ASP.Net, and like other controls under the Data tab, the Chart control is also a data-bound control. So I am going to bind this with my sales data. From the design view, right click the chart control and select “show smart tag”. Here you need to choose the Data source property and the chart type. From the choose data source drop down, select new data source. In the data source configuration wizard, select the SQL database and write the query to retrieve the data. At first I am going to show the chart for the amount of sales done by each sales person. I am going to use the following query inside the sqldatasource select command: “SELECT SUM(SaleAmount) AS Expr1, salesperson FROM SalesData GROUP BY SalesPerson” This query will give me the amount of sales achieved by each sales person. The markup of the SqlDataSource is as follows. <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:SampleConnectionString %>" SelectCommand="SELECT SUM(SaleAmount) as amount, SalesPerson FROM SalesData GROUP BY SalesPerson"></asp:SqlDataSource> Once you have selected the data source for the chart control, you need to select the X and Y values for the columns. I have entered SalesPerson in the X Value member and amount in the Y Value member. After these modifications, the Chart control looks as follows. Press F5 to run the application. The output of the page is as follows. Using ASP.Net it is much easier to represent your data in graphical format. To show this chart, I didn't even write a single line of code. The chart control is a great tool that helps the developer show business intelligence in their applications without using third party products. I will write another blog post that explores further possibilities and shows more reports using the same sales data. If you want to get the project in zipped format, post your email below.
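For reference, once the smart tag settings above have been applied, the chart markup ends up looking roughly like the snippet below (a sketch based on the settings described in this article; the series and chart area names are whatever the designer generates):

<asp:Chart ID="Chart1" runat="server" DataSourceID="SqlDataSource1">
    <Series>
        <asp:Series Name="Series1" XValueMember="SalesPerson" YValueMembers="amount">
        </asp:Series>
    </Series>
    <ChartAreas>
        <asp:ChartArea Name="ChartArea1">
        </asp:ChartArea>
    </ChartAreas>
</asp:Chart>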

    Read the article

  • Silverlight Cream for January 26, 2011 -- #1036

    - by Dave Campbell
    In this all-submittal Issue: XamlNinja, Kevin Dockx, Steve Wortham, Andrea Boschin, Mick Norman, Colin Eberhardt, and Rudi Grobler(-2-, -3-, -4-, -5-). Above the Fold: Silverlight: "Getting an invalid cross-thread exception in Silverlight?" Kevin Dockx WP7: "WP7 Contrib – the last messenger" XamlNinja ISO: "How many files are too many files for isolated storage?" Mick Norman Shoutouts: Telerik announced a free WP7 Webinars series that you probably don't want to miss: Join Us for the Special Free Windows Phone 7 Webinars Series. Guest lecturers - Shawn Wildermuth and Mark Arteaga From SilverlightCream.com: WP7 Contrib – the last messenger XamlNinja has a great post up extending Laurent's IMessenger to deal with a tricky issue of trying to fire a message from one VM to another even if the 2nd VM isn't alive yet... oh, and this is in WP7Contrib, so go grab it! Getting an invalid cross-thread exception in Silverlight? Kevin Dockx has a solution to a problem we've all had... the 'invalid cross-thread exception' ... and the solution is even for those of us trying to do this in a VM... cool and easy solution, Kevin! Mastering Storyboards One Mistake at a Time Steve Wortham is back with a tutorial with a great title :) ... check out the progression from one success to another in this picture/title viewer ... don't miss the very end where he has the control rolled up into a CaptionedImageHyperlink, and a link to download it! Windows Phone 7 - Part #2: Your First Application Andrea Boschin has part 2 of his SilverlightShow WP7 series up. Lots of good intro material here on the manifest file and app.xaml ... he even gets into the ApplicationBar, phone orientation, and the Metro theme. How many files are too many files for isolated storage? Mick Norman alerted me to his blog early this morning, and this is his latest post... interesting tests of how many files are too many for ISO on your WP7... and I have to admit... he's stuffing a boatload of them out there in these tests! ... great info Mick! and thanks for the links. A Navigator Control For Visiblox Time Series Charts Colin Eberhardt's latest post is about creating an interactive navigator for large time series datasets in Visiblox charts.... check the images at the top of the post, and it'll be obvious :) ... very cool stuff. MVVM Frameworks with WP7 support Rudi Grobler has been very busy and if you check the dates, these posts are all in a day or two! This first highlights two contenders for MVVM on WP7: Caliburn and MVVMLight... both well-supported... quick intro to each followed by good links out to the author's sites Reading barcodes from your WP7 device Rudi Grobler also has a cool post up on reading barcodes with your WP7... he's using the ZXing Barcode Scanning Library, and makes quick work of the job. Taking Sterling for a Test-Drive Rudi Grobler has a quick intro to Sterlink, Jeremy Likness' ISO database for Silverlight up... quickly taking care of writing and reading back data. SQLite on WP7 After his discussion of Sterling, Rudi Grobler is now demonstrating the use of SQLite that has been ported to WP7. Check out his demo code... looks pretty easy to use. Hacking the WP7 Camera (The basics) Rudi Grobler's latest post is on getting direct access to the camera on WP7... be sure to do all the downloads and check out the external links he has. Stay in the 'Light! 
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Should I upgrade to "Ubuntu 14.04 'Trusty Tahr'" from "Ubuntu 12.04 LTS" and what care do I need to take if I upgrade?

    - by PHPLover
    I'm basically a Web Developer (PHP Developer) by profession. I mainly work on PHP, jQuery, AJAX, Smarty, HTML and CSS, and the Bootstrap front-end web development framework. I've also installed and am using IDEs/editors like Sublime Text and NetBeans. I'm also using a Git repository for my website development as a versioning tool. I've been using "Ubuntu 12.04 LTS" on my machine for almost the last two years. My machine configuration is as follows: Memory : 3.7 GiB Processor : Intel® Core™ i3 CPU M 370 @ 2.40GHz × 4 Graphics : Unknown OS type : 64-bit Disk : 64-bit The important software packages present on my machine, which I'm using daily for my work, are as follows: PHP : PHP 5.3.10-1ubuntu3.13 with Suhosin-Patch (cli) (built: Jul 7 2014 18:54:55) Copyright (c) 1997-2012 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies Apache web server : /usr/sbin/apachectl: 87: ulimit: error setting limit (Operation not permitted) Server version: Apache/2.2.22 (Ubuntu) Server built: Jul 22 2014 14:35:25 Server's Module Magic Number: 20051115:30 Server loaded: APR 1.4.6, APR-Util 1.3.12 Compiled using: APR 1.4.6, APR-Util 1.3.12 Architecture: 64-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/etc/apache2" -D SUEXEC_BIN="/usr/lib/apache2/suexec" -D DEFAULT_PIDLOG="/var/run/apache2.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="/var/run/apache2/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="mime.types" -D SERVER_CONFIG_FILE="apache2.conf" MySQL : 5.5.38-0ubuntu0.12.04.1 Smarty : 2.6.18 NetBeans : NetBeans IDE 8.0 (Build 201403101706) Sublime Text 2 : Version 2.0.2, Build 2221 Yesterday a pop-up message suddenly appeared on my screen asking me to upgrade to "Ubuntu 14.04 'Trusty Tahr'". I'd be very happy to upgrade my system to "Ubuntu 14.04 'Trusty Tahr'". Following are the issues about which I'm a little bit scared and on which I need your expert advice/help/suggestions: Will upgrading to "Ubuntu 14.04 'Trusty Tahr'" affect the software I mentioned above? I mean, will I need to re-install/uninstall and reinstall these packages too? Do I really need to upgrade, and is it really worth upgrading to "Ubuntu 14.04 'Trusty Tahr'" from "Ubuntu 12.04 LTS" now? If I upgrade to "Ubuntu 14.04 'Trusty Tahr'", what advantage will I get from a web developer's point of view? Will the upgrade be hassle free, and will I be able to continue my ongoing work without any difficulties? Is "Ubuntu 14.04 'Trusty Tahr'" an LTS version, and if yes, until when will it be supported? These are the five crucial queries I have. If you want any further explanation from me, please feel free to ask. Thanks for spending some of your valuable time reading and understanding my issue. Any kind of help/comment/suggestion/answer would be highly appreciated, though if someone gives a canonical, precise and up-to-the-mark answer, it will be of great help to me as well as other web developers using Ubuntu around the world. Once again thank you so much, you great people around the globe. Waiting for your precious replies.
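From what I've read so far, the actual upgrade would be done with something like the commands below; please correct me if this is wrong, as I haven't run them yet:

# bring 12.04 fully up to date first (and back up /var/www, databases and config files)
sudo apt-get update && sudo apt-get dist-upgrade
# 12.04 is offered the jump to 14.04 only when Prompt=lts is set in /etc/update-manager/release-upgrades
sudo do-release-upgrade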

    Read the article

< Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >