Search Results

Search found 42545 results on 1702 pages for 'confirm alert on browser close event'.


  • Using multiple column layout with HTML 5 and CSS 3

    - by nikolaosk
    This is going to be the fourth post in a series of posts regarding HTML 5. You can find the other posts here, here and here.

    In this post I will provide a hands-on example with HTML 5 and CSS 3 on how to create a page with multiple columns and a proper layout. I will show you how to use CSS 3 to create columns much more easily than by relying on DIV elements and the float CSS rule. I will also show you how to use browser-specific prefix rules (-moz for Firefox and -webkit for Chrome) for browsers that do not fully support CSS 3.

    To be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML 5 Doctor. Two very nice sites that show you which features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specification. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr.

    In this hands-on example I will be using Expression Web 4.0. This application is not free, but you can use any HTML editor you like, for example Visual Studio 2012 Express edition, which you can download here.

    I will create a simple page with information about HTML 5, CSS 3 and JQuery. This is the full HTML 5 code:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <title>HTML 5, CSS3 and JQuery</title>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
        <link rel="stylesheet" type="text/css" href="style.css">
      </head>
      <body>
        <div id="header">
          <h1>Learn cutting edge technologies</h1>
          <p>HTML 5, JQuery, CSS3</p>
        </div>
        <div id="main">
          <div id="mainnews">
            <div>
              <h2>HTML 5</h2>
            </div>
            <div>
              <p>HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.</p>
              <div class="quote">
                <h4>Do More with Less</h4>
                <p>jQuery is a fast and concise JavaScript Library that simplifies HTML document traversing, event handling, animating, and Ajax interactions for rapid web development.</p>
              </div>
              <p>The HTML5 test (html5test.com) score is an indication of how well your browser supports the upcoming HTML5 standard and related specifications. Even though the specification isn't finalized yet, all major browser manufacturers are making sure their browser is ready for the future. Find out which parts of HTML5 are already supported by your browser today and compare the results with other browsers. The HTML5 test does not try to test all of the new features offered by HTML5, nor does it try to test the functionality of each feature it does detect. Despite these shortcomings we hope that by quantifying the level of support, users and web developers will get an idea of how hard the browser manufacturers work on improving their browsers and the web as a development platform.</p>
            </div>
          </div>
          <div id="CSS">
            <div>
              <h2>CSS 3 Intro</h2>
            </div>
            <div>
              <p>Cascading Style Sheets (CSS) is a style sheet language used for describing the presentation semantics (the look and formatting) of a document written in a markup language. Its most common application is to style web pages written in HTML and XHTML, but the language can also be applied to any kind of XML document, including plain XML, SVG and XUL.</p>
            </div>
          </div>
          <div id="CSSmore">
            <div>
              <h2>CSS 3 Purpose</h2>
            </div>
            <div>
              <p>CSS is designed primarily to enable the separation of document content (written in HTML or a similar markup language) from document presentation, including elements such as the layout, colors, and fonts.[1] This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple pages to share formatting, and reduce complexity and repetition in the structural content (such as by allowing for tableless web design).</p>
            </div>
          </div>
        </div>
        <div id="footer">
          <p>Feel free to google more about the subject</p>
        </div>
      </body>
    </html>

    The markup is very easy to follow. I have used some HTML 5 tags and the relevant HTML 5 doctype. The CSS code (style.css) follows:

    body {
      line-height: 30px;
      width: 1024px;
      background-color: #eee;
    }

    p {
      font-size: 17px;
      font-family: "Comic Sans MS";
    }

    p, h2, h3, h4 {
      margin: 0 0 20px 0;
    }

    #main, #header, #footer {
      width: 100%;
      margin: 0px auto;
      display: block;
    }

    #header {
      text-align: center;
      border-bottom: 1px solid #000;
      margin-bottom: 30px;
    }

    #footer {
      text-align: center;
      border-top: 1px solid #000;
      margin-bottom: 30px;
    }

    .quote {
      width: 200px;
      margin-left: 10px;
      padding: 5px;
      float: right;
      border: 2px solid #000;
      background-color: #F9ACAE;
    }

    .quote :last-child {
      margin-bottom: 0;
    }

    #main {
      column-count: 2;
      column-gap: 20px;
      column-rule: 1px solid #000;
      -moz-column-count: 2;
      -webkit-column-count: 2;
      -moz-column-gap: 20px;
      -webkit-column-gap: 20px;
      -moz-column-rule: 1px solid #000;
      -webkit-column-rule: 1px solid #000;
    }

    All the rules in the CSS code are pretty simple. The layout is achieved with the #main rule above. Do note the column-count, column-gap and column-rule properties: these properties make the two-column layout possible. Please have a look at the picture below to see why I used prefixes for Chrome (-webkit) and Firefox (-moz); it clearly indicates that the unprefixed CSS 3 column properties are not supported by Firefox and Chrome.

    Finally I test my simple HTML 5 page using the latest versions of Firefox, Internet Explorer and Chrome. On my machine I have installed Firefox 15.0.1; have a look at the picture below to see how the page looks. I have also installed Google Chrome 21.0; have a look at the picture below to see how the page looks there. Have a look at the picture below to see how my page looks in IE 10. My page looks the same in all browsers. Hope it helps!!!
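
    As a footnote to the example above: a quick script-side check of column support, a minimal sketch using only standard DOM APIs, with the property names mirroring the -moz/-webkit prefixes used earlier (TypeScript; not part of the original post).

    // Probe an element's style object for standard and prefixed column support.
    const probe = document.createElement("div").style;
    const supportsColumns = ["columnCount", "MozColumnCount", "webkitColumnCount"]
        .some((name) => name in probe);
    console.log(supportsColumns ? "CSS 3 columns available" : "fall back to floats");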

    Read the article

  • add a form to backup routine

    - by Gerard Flynn
    Hi, how do you put a progress bar and a button onto this code? I have the class below and want to add a GUI on top of it.

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Text;
    using System.Windows.Forms;
    using System.Data.SqlClient;
    using System.IO;
    using System.Threading;
    using Tamir.SharpSsh;
    using System.Security.Cryptography;
    using ICSharpCode.SharpZipLib.Checksums;
    using ICSharpCode.SharpZipLib.Zip;
    using ICSharpCode.SharpZipLib.GZip;

    namespace backup {
        public partial class Form1 : Form {
            public Form1() {
                InitializeComponent();
            }

            /// <summary>
            /// Summary description for Class1.
            /// </summary>
            public class Backup {
                private string dbName;
                private string dbUsername;
                private string dbPassword;
                private static string baseDir;
                private string backupName;
                private static bool isBackup;
                private string keyString;
                private string ivString;
                private string[] backupDirs = new string[0];
                private string[] excludeDirs = new string[0];
                private ZipOutputStream zipOutputStream;
                private string backupFile;
                private string zipFile;
                private string encryptedFile;

                static void Main() {
                    Backup.Log("BackupUtility loaded");
                    try {
                        new Backup();
                        if (!isBackup) MessageBox.Show("Restore complete");
                    } catch (Exception e) {
                        Backup.Log(e.ToString());
                        if (!isBackup) MessageBox.Show("Error restoring!\r\n" + e.Message);
                    }
                }

                private void LoadAppSettings() {
                    this.backupName = System.Configuration.ConfigurationSettings.AppSettings["BackupName"].ToString();
                    this.dbName = System.Configuration.ConfigurationSettings.AppSettings["DBName"].ToString();
                    this.dbUsername = System.Configuration.ConfigurationSettings.AppSettings["DBUsername"].ToString();
                    this.dbPassword = System.Configuration.ConfigurationSettings.AppSettings["DBPassword"].ToString();
                    // default to using where we are executing this assembly from
                    Backup.baseDir = System.Reflection.Assembly.GetExecutingAssembly().Location.Substring(0, System.Reflection.Assembly.GetExecutingAssembly().Location.LastIndexOf("\\")) + "\\";
                    Backup.isBackup = bool.Parse(System.Configuration.ConfigurationSettings.AppSettings["IsBackup"].ToString());
                    this.keyString = System.Configuration.ConfigurationSettings.AppSettings["KeyString"].ToString();
                    this.ivString = System.Configuration.ConfigurationSettings.AppSettings["IVString"].ToString();
                    this.backupDirs = GetSetting("BackupDirs", ',');
                    this.excludeDirs = GetSetting("ExcludeDirs", ',');
                }

                private string[] GetSetting(string settingName, char delimiter) {
                    if (System.Configuration.ConfigurationSettings.AppSettings[settingName] != null) {
                        string settingVal = System.Configuration.ConfigurationSettings.AppSettings[settingName].ToString();
                        if (settingVal.Length > 0) return settingVal.Split(delimiter);
                    }
                    return new string[0];
                }

                public Backup() {
                    this.LoadAppSettings();
                    if (isBackup) this.DoBackup();
                    else this.DoRestore();
                    Log("Finished");
                }

                private void DoRestore() {
                    System.Windows.Forms.OpenFileDialog fileDialog = new System.Windows.Forms.OpenFileDialog();
                    fileDialog.Title = "Choose .encrypted file";
                    fileDialog.Filter = "Encrypted files (*.encrypted)|*.encrypted|All files (*.*)|*.*";
                    fileDialog.InitialDirectory = Backup.baseDir;
                    if (fileDialog.ShowDialog() == System.Windows.Forms.DialogResult.OK) {
                        //string encryptedFile = GetFileName("encrypted");
                        string encryptedFile = fileDialog.FileName;
                        string decryptedFile = this.GetDecryptedFilename(encryptedFile);
                        //string originalFile = GetFileName("original");
                        this.Decrypt(encryptedFile, decryptedFile);
                        //this.UnzipFile(decryptedFile, originalFile);
                    }
                }

                // use the same filename as the backup except replace ".encrypted" with ".decrypted.zip"
                private string GetDecryptedFilename(string encryptedFile) {
                    string name = encryptedFile.Substring(0, encryptedFile.LastIndexOf("."));
                    name += ".decrypted.zip";
                    return name;
                }

                private void DoBackup() {
                    this.backupFile = GetFileName("bak");
                    this.zipFile = GetFileName("zip");
                    this.encryptedFile = GetFileName("encrypted");
                    this.DeleteFiles();
                    this.zipOutputStream = new ZipOutputStream(File.Create(zipFile));
                    try {
                        // backup database first
                        if (this.dbName.Length > 0) {
                            this.BackupDB(backupFile);
                            this.ZipFile(backupFile, this.GetName(backupFile));
                        }
                        // zip any directories specified in config file
                        this.ZipUserSpecifiedFilesAndDirectories(this.backupDirs);
                    } finally {
                        this.zipOutputStream.Finish();
                        this.zipOutputStream.Close();
                    }
                    this.Encrypt(zipFile, encryptedFile);
                    this.SCPFile(encryptedFile);
                    this.DeleteFiles();
                }

                /// <summary>
                /// Deletes any files created by the backup process, namely the DB backup file,
                /// the zip of all files backed up, and the encrypted zip file
                /// </summary>
                private void DeleteFiles() {
                    File.Delete(this.backupFile);
                    File.Delete(this.zipFile);
                    ///File.Delete(this.encryptedFile);
                }

                private void ZipUserSpecifiedFilesAndDirectories(string[] fileNames) {
                    foreach (string fileName in fileNames) {
                        string name = fileName.Trim();
                        if (name.Length > 0) {
                            Log("Zipping " + name);
                            this.ZipFile(name, this.GetNameFromDir(name));
                        }
                    }
                }

                private void SCPFile(string inputPath) {
                    string sshServer = System.Configuration.ConfigurationSettings.AppSettings["SSHServer"].ToString();
                    string sshUsername = System.Configuration.ConfigurationSettings.AppSettings["SSHUsername"].ToString();
                    string sshPassword = System.Configuration.ConfigurationSettings.AppSettings["SSHPassword"].ToString();
                    if (sshServer.Length > 0 && sshUsername.Length > 0 && sshPassword.Length > 0) {
                        Scp scp = new Scp(sshServer, sshUsername, sshPassword);
                        // Copy a file from local machine to remote SSH server
                        scp.Connect();
                        Log("Connected to " + sshServer);
                        //scp.Put(inputPath, "/home/wal/temp.txt");
                        scp.Put(inputPath, GetName(inputPath));
                        scp.Close();
                    } else {
                        Log("Not SCP as missing login details");
                    }
                }

                private string GetName(string inputPath) {
                    FileInfo info = new FileInfo(inputPath);
                    return info.Name;
                }

                private string GetNameFromDir(string inputPath) {
                    DirectoryInfo info = new DirectoryInfo(inputPath);
                    return info.Name;
                }

                private static void Log(string msg) {
                    try {
                        string toLog = DateTime.Now.ToString() + ": " + msg;
                        System.Diagnostics.Debug.WriteLine(toLog);
                        System.IO.FileStream fs = new System.IO.FileStream(baseDir + "app.log", System.IO.FileMode.OpenOrCreate, System.IO.FileAccess.ReadWrite);
                        System.IO.StreamWriter m_streamWriter = new System.IO.StreamWriter(fs);
                        m_streamWriter.BaseStream.Seek(0, System.IO.SeekOrigin.End);
                        m_streamWriter.WriteLine(toLog);
                        m_streamWriter.Flush();
                        m_streamWriter.Close();
                        fs.Close();
                    } catch (Exception e) {
                        Console.WriteLine(e.ToString());
                    }
                }

                private byte[] GetFileBytes(string path) {
                    FileStream stream = new FileStream(path, FileMode.Open);
                    byte[] bytes = new byte[stream.Length];
                    stream.Read(bytes, 0, bytes.Length);
                    stream.Close();
                    return bytes;
                }

                private void WriteFileBytes(byte[] bytes, string path) {
                    FileStream stream = new FileStream(path, FileMode.Create);
                    stream.Write(bytes, 0, bytes.Length);
                    stream.Close();
                }

                private void UnzipFile(string inputPath, string outputPath) {
                    ZipInputStream zis = new ZipInputStream(File.OpenRead(inputPath));
                    ZipEntry theEntry = zis.GetNextEntry();
                    FileStream streamWriter = File.Create(outputPath);
                    int size = 2048;
                    byte[] data = new byte[2048];
                    while (true) {
                        size = zis.Read(data, 0, data.Length);
                        if (size > 0) {
                            streamWriter.Write(data, 0, size);
                        } else {
                            break;
                        }
                    }
                    streamWriter.Close();
                    zis.Close();
                }

                private bool ExcludeDir(string dirName) {
                    foreach (string excludeDir in this.excludeDirs) {
                        if (dirName == excludeDir) return true;
                    }
                    return false;
                }

                private void ZipFile(string inputPath, string zipName) {
                    FileAttributes fa = File.GetAttributes(inputPath);
                    if ((fa & FileAttributes.Directory) != 0) {
                        string dirName = zipName + "/";
                        ZipEntry entry1 = new ZipEntry(dirName);
                        this.zipOutputStream.PutNextEntry(entry1);
                        string[] subDirs = Directory.GetDirectories(inputPath);
                        // create directories first
                        foreach (string subDir in subDirs) {
                            DirectoryInfo info = new DirectoryInfo(subDir);
                            string name = info.Name;
                            if (this.ExcludeDir(name)) Log("Excluding " + dirName + name);
                            else this.ZipFile(subDir, dirName + name);
                        }
                        // then store files
                        string[] fileNames = Directory.GetFiles(inputPath);
                        foreach (string fileName in fileNames) {
                            FileInfo info = new FileInfo(fileName);
                            string name = info.Name;
                            this.ZipFile(fileName, dirName + name);
                        }
                    } else {
                        Crc32 crc = new Crc32();
                        this.zipOutputStream.SetLevel(6); // 0 - store only to 9 - means best compression
                        FileStream fs = null;
                        try {
                            fs = File.OpenRead(inputPath);
                        } catch (IOException ioEx) {
                            Log("WARNING! " + ioEx.Message); // might be in use, skip file in this case
                        }
                        if (fs != null) {
                            byte[] buffer = new byte[fs.Length];
                            fs.Read(buffer, 0, buffer.Length);
                            ZipEntry entry = new ZipEntry(zipName);
                            entry.DateTime = DateTime.Now;
                            // set Size and the crc, because the information
                            // about the size and crc should be stored in the header
                            // if it is not set it is automatically written in the footer.
                            // (in this case size == crc == -1 in the header)
                            // Some ZIP programs have problems with zip files that don't store
                            // the size and crc in the header.
                            entry.Size = fs.Length;
                            fs.Close();
                            crc.Reset();
                            crc.Update(buffer);
                            entry.Crc = crc.Value;
                            this.zipOutputStream.PutNextEntry(entry);
                            this.zipOutputStream.Write(buffer, 0, buffer.Length);
                        }
                    }
                }

                private void Encrypt(string inputPath, string outputPath) {
                    RijndaelManaged rijndaelManaged = new RijndaelManaged();
                    byte[] encrypted;
                    byte[] toEncrypt;
                    //Create a new key and initialization vector.
                    //myRijndael.GenerateKey();
                    //myRijndael.GenerateIV();
                    /*des.GenerateKey();
                    des.GenerateIV();
                    string temp1 = Convert.ToBase64String(des.Key);
                    string temp2 = Convert.ToBase64String(des.IV);*/
                    //Get the key and IV.
                    byte[] key = Convert.FromBase64String(keyString);
                    byte[] IV = Convert.FromBase64String(ivString);
                    //Get an encryptor.
                    ICryptoTransform encryptor = rijndaelManaged.CreateEncryptor(key, IV);
                    //Encrypt the data.
                    MemoryStream msEncrypt = new MemoryStream();
                    CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write);
                    //Convert the data to a byte array.
                    toEncrypt = this.GetFileBytes(inputPath);
                    //Write all data to the crypto stream and flush it.
                    csEncrypt.Write(toEncrypt, 0, toEncrypt.Length);
                    csEncrypt.FlushFinalBlock();
                    //Get encrypted array of bytes.
                    encrypted = msEncrypt.ToArray();
                    WriteFileBytes(encrypted, outputPath);
                }

                private void Decrypt(string inputPath, string outputPath) {
                    RijndaelManaged myRijndael = new RijndaelManaged();
                    //DES des = new DESCryptoServiceProvider();
                    byte[] key = Convert.FromBase64String(keyString);
                    byte[] IV = Convert.FromBase64String(ivString);
                    byte[] encrypted = this.GetFileBytes(inputPath);
                    byte[] fromEncrypt;
                    //Get a decryptor that uses the same key and IV as the encryptor.
                    ICryptoTransform decryptor = myRijndael.CreateDecryptor(key, IV);
                    //Now decrypt the previously encrypted message using the decryptor
                    //obtained in the above step.
                    MemoryStream msDecrypt = new MemoryStream(encrypted);
                    CryptoStream csDecrypt = new CryptoStream(msDecrypt, decryptor, CryptoStreamMode.Read);
                    fromEncrypt = new byte[encrypted.Length];
                    //Read the data out of the crypto stream.
                    int bytesRead = csDecrypt.Read(fromEncrypt, 0, fromEncrypt.Length);
                    byte[] readBytes = new byte[bytesRead];
                    Array.Copy(fromEncrypt, 0, readBytes, 0, bytesRead);
                    this.WriteFileBytes(readBytes, outputPath);
                }

                private string GetFileName(string extension) {
                    return baseDir + backupName + "_" + DateTime.Now.ToString("yyyyMMdd") + "." + extension;
                }

                private void BackupDB(string backupPath) {
                    string sql = @"DECLARE @Date VARCHAR(300), @Dir VARCHAR(4000)
    --Get today date
    SET @Date = CONVERT(VARCHAR, GETDATE(), 112)
    --Set the directory where the back up file is stored
    SET @Dir = '";
                    sql += backupPath;
                    sql += @"'
    --create a 'device' to write to first
    EXEC sp_addumpdevice 'disk', 'temp_device', @Dir
    --now do the backup
    BACKUP DATABASE " + this.dbName;
                    sql += @" TO temp_device WITH FORMAT
    --Drop the device
    EXEC sp_dropdevice 'temp_device' ";
                    //Console.WriteLine("sql="+sql);
                    Backup.Log("Starting backup of " + this.dbName);
                    ExecuteSQL(sql);
                }

                /// <summary>
                /// Executes the specified SQL
                /// Returns true if no errors were encountered during execution
                /// </summary>
                /// <param name="procedureName"></param>
                private void ExecuteSQL(string sql) {
                    SqlConnection conn = new SqlConnection(this.GetDBConnectString());
                    try {
                        SqlCommand comm = new SqlCommand(sql, conn);
                        conn.Open();
                        comm.ExecuteNonQuery();
                    } finally {
                        conn.Close();
                    }
                }

                private string GetDBConnectString() {
                    StringBuilder builder = new StringBuilder();
                    builder.Append("Data Source=127.0.0.1; User ID=");
                    builder.Append(this.dbUsername);
                    builder.Append("; Password=");
                    builder.Append(this.dbPassword);
                    builder.Append("; Initial Catalog=");
                    builder.Append(this.dbName);
                    builder.Append(";Connect Timeout=30");
                    return builder.ToString();
                }
            }
        }
    }

    Read the article

  • OpenGL 3.x Assimp trouble implementing phong shading (normals?)

    - by Defcronyke
    I'm having trouble getting phong shading to look right. I'm pretty sure there's something wrong with either my OpenGL calls or the way I'm loading my normals, but I guess it could be something else, since 3D graphics and Assimp are both still very new to me. When trying to load .obj/.mtl files, the problems I'm seeing are:

    1. The models seem to be lit too intensely (less phong-style and more completely washed out, too bright).
    2. Faces that are lit seem to be lit equally all over (with the exception of a specular highlight showing only when the light source position is moved to be practically right on top of the model).

    Because of problems 1 and 2, spheres look very wrong (picture of sphere), and things with larger faces look (less noticeably) wrong too (picture of cube). I could be wrong, but to me this doesn't look like proper phong shading.

    Here's the code that I think might be relevant (I can post more if necessary):

    file: assimpRenderer.cpp

    #include "assimpRenderer.hpp"

    namespace def {
        assimpRenderer::assimpRenderer(std::string modelFilename, float modelScale) {
            initSFML();
            initOpenGL();
            if (assImport(modelFilename)) { // if modelFile loaded successfully
                initScene();
                mainLoop(modelScale);
                shutdownScene();
            }
            shutdownOpenGL();
            shutdownSFML();
        }

        assimpRenderer::~assimpRenderer() {
        }

        void assimpRenderer::initSFML() {
            windowWidth = 800;
            windowHeight = 600;
            settings.majorVersion = 3;
            settings.minorVersion = 3;
            app = NULL;
            shader = NULL;
            app = new sf::Window(sf::VideoMode(windowWidth, windowHeight, 32), "OpenGL 3.x Window", sf::Style::Default, settings);
            app->setFramerateLimit(240);
            app->setActive();
            return;
        }

        void assimpRenderer::shutdownSFML() {
            delete app;
            return;
        }

        void assimpRenderer::initOpenGL() {
            GLenum err = glewInit();
            if (GLEW_OK != err) {
                /* Problem: glewInit failed, something is seriously wrong. */
                std::cerr << "Error: " << glewGetErrorString(err) << std::endl;
            }
            // check the OpenGL context version that's currently in use
            int glVersion[2] = {-1, -1};
            glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]); // get the OpenGL Major version
            glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]); // get the OpenGL Minor version
            std::cout << "Using OpenGL Version: " << glVersion[0] << "." << glVersion[1] << std::endl;
            return;
        }

        void assimpRenderer::shutdownOpenGL() {
            return;
        }

        void assimpRenderer::initScene() {
            // allocate heap space for VAOs, VBOs, and IBOs
            vaoID = new GLuint[scene->mNumMeshes];
            vboID = new GLuint[scene->mNumMeshes * 2];
            iboID = new GLuint[scene->mNumMeshes];
            glClearColor(0.4f, 0.6f, 0.9f, 0.0f);
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LEQUAL);
            glEnable(GL_CULL_FACE);
            shader = new Shader("shader.vert", "shader.frag");
            projectionMatrix = glm::perspective(60.0f, (float)windowWidth / (float)windowHeight, 0.1f, 100.0f);
            rot = 0.0f;
            rotSpeed = 50.0f;
            faceIndex = 0;
            colorArrayA = NULL;
            colorArrayD = NULL;
            colorArrayS = NULL;
            normalArray = NULL;
            genVAOs();
            return;
        }

        void assimpRenderer::shutdownScene() {
            delete [] iboID;
            delete [] vboID;
            delete [] vaoID;
            delete shader;
        }

        void assimpRenderer::renderScene(float modelScale) {
            sf::Time elapsedTime = clock.getElapsedTime();
            clock.restart();
            if (rot > 360.0f) rot = 0.0f;
            rot += rotSpeed * elapsedTime.asSeconds();
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
            viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, -3.0f, -10.0f)); // move back a bit
            modelMatrix = glm::scale(glm::mat4(1.0f), glm::vec3(modelScale)); // scale model
            modelMatrix = glm::rotate(modelMatrix, rot, glm::vec3(0, 1, 0));
            //modelMatrix = glm::rotate(modelMatrix, 25.0f, glm::vec3(0, 1, 0));
            glm::vec3 lightPosition(0.0f, -100.0f, 0.0f);
            float lightPositionArray[3];
            lightPositionArray[0] = lightPosition[0];
            lightPositionArray[1] = lightPosition[1];
            lightPositionArray[2] = lightPosition[2];
            shader->bind();
            int projectionMatrixLocation = glGetUniformLocation(shader->id(), "projectionMatrix");
            int viewMatrixLocation = glGetUniformLocation(shader->id(), "viewMatrix");
            int modelMatrixLocation = glGetUniformLocation(shader->id(), "modelMatrix");
            int ambientLocation = glGetUniformLocation(shader->id(), "ambientColor");
            int diffuseLocation = glGetUniformLocation(shader->id(), "diffuseColor");
            int specularLocation = glGetUniformLocation(shader->id(), "specularColor");
            int lightPositionLocation = glGetUniformLocation(shader->id(), "lightPosition");
            int normalMatrixLocation = glGetUniformLocation(shader->id(), "normalMatrix");
            glUniformMatrix4fv(projectionMatrixLocation, 1, GL_FALSE, &projectionMatrix[0][0]);
            glUniformMatrix4fv(viewMatrixLocation, 1, GL_FALSE, &viewMatrix[0][0]);
            glUniformMatrix4fv(modelMatrixLocation, 1, GL_FALSE, &modelMatrix[0][0]);
            glUniform3fv(lightPositionLocation, 1, lightPositionArray);
            for (unsigned int i = 0; i < scene->mNumMeshes; i++) {
                colorArrayA = new float[3];
                colorArrayD = new float[3];
                colorArrayS = new float[3];
                material = scene->mMaterials[scene->mNumMaterials - 1];
                normalArray = new float[scene->mMeshes[i]->mNumVertices * 3];
                unsigned int normalIndex = 0;
                for (unsigned int j = 0; j < scene->mMeshes[i]->mNumVertices * 3; j += 3, normalIndex++) {
                    normalArray[j] = scene->mMeshes[i]->mNormals[normalIndex].x; // x
                    normalArray[j + 1] = scene->mMeshes[i]->mNormals[normalIndex].y; // y
                    normalArray[j + 2] = scene->mMeshes[i]->mNormals[normalIndex].z; // z
                }
                normalIndex = 0;
                glUniformMatrix3fv(normalMatrixLocation, 1, GL_FALSE, normalArray);
                aiColor3D ambient(0.0f, 0.0f, 0.0f);
                material->Get(AI_MATKEY_COLOR_AMBIENT, ambient);
                aiColor3D diffuse(0.0f, 0.0f, 0.0f);
                material->Get(AI_MATKEY_COLOR_DIFFUSE, diffuse);
                aiColor3D specular(0.0f, 0.0f, 0.0f);
                material->Get(AI_MATKEY_COLOR_SPECULAR, specular);
                colorArrayA[0] = ambient.r; colorArrayA[1] = ambient.g; colorArrayA[2] = ambient.b;
                colorArrayD[0] = diffuse.r; colorArrayD[1] = diffuse.g; colorArrayD[2] = diffuse.b;
                colorArrayS[0] = specular.r; colorArrayS[1] = specular.g; colorArrayS[2] = specular.b;
                // bind color for each mesh
                glUniform3fv(ambientLocation, 1, colorArrayA);
                glUniform3fv(diffuseLocation, 1, colorArrayD);
                glUniform3fv(specularLocation, 1, colorArrayS);
                // render all meshes
                glBindVertexArray(vaoID[i]); // bind our VAO
                glDrawElements(GL_TRIANGLES, scene->mMeshes[i]->mNumFaces * 3, GL_UNSIGNED_INT, 0);
                glBindVertexArray(0); // unbind our VAO
                delete [] normalArray;
                delete [] colorArrayA;
                delete [] colorArrayD;
                delete [] colorArrayS;
            }
            shader->unbind();
            app->display();
            return;
        }

        void assimpRenderer::handleEvents() {
            sf::Event event;
            while (app->pollEvent(event)) {
                if (event.type == sf::Event::Closed) {
                    app->close();
                }
                if ((event.type == sf::Event::KeyPressed) && (event.key.code == sf::Keyboard::Escape)) {
                    app->close();
                }
                if (event.type == sf::Event::Resized) {
                    glViewport(0, 0, event.size.width, event.size.height);
                }
            }
            return;
        }

        void assimpRenderer::mainLoop(float modelScale) {
            while (app->isOpen()) {
                renderScene(modelScale);
                handleEvents();
            }
        }

        bool assimpRenderer::assImport(const std::string& pFile) {
            // read the file with some example postprocessing
            scene = importer.ReadFile(pFile,
                aiProcess_CalcTangentSpace |
                aiProcess_Triangulate |
                aiProcess_JoinIdenticalVertices |
                aiProcess_SortByPType);
            // if the import failed, report it
            if (!scene) {
                std::cerr << "Error: " << importer.GetErrorString() << std::endl;
                return false;
            }
            return true;
        }

        void assimpRenderer::genVAOs() {
            int vboIndex = 0;
            for (unsigned int i = 0; i < scene->mNumMeshes; i++, vboIndex += 2) {
                mesh = scene->mMeshes[i];
                indexArray = new unsigned int[mesh->mNumFaces * sizeof(unsigned int) * 3];
                // convert assimp faces format to array
                faceIndex = 0;
                for (unsigned int t = 0; t < mesh->mNumFaces; ++t) {
                    const struct aiFace* face = &mesh->mFaces[t];
                    std::memcpy(&indexArray[faceIndex], face->mIndices, sizeof(float) * 3);
                    faceIndex += 3;
                }
                // generate VAO
                glGenVertexArrays(1, &vaoID[i]);
                glBindVertexArray(vaoID[i]);
                // generate IBO for faces
                glGenBuffers(1, &iboID[i]);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID[i]);
                glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint) * mesh->mNumFaces * 3, indexArray, GL_STATIC_DRAW);
                // generate VBO for vertices
                if (mesh->HasPositions()) {
                    glGenBuffers(1, &vboID[vboIndex]);
                    glBindBuffer(GL_ARRAY_BUFFER, vboID[vboIndex]);
                    glBufferData(GL_ARRAY_BUFFER, mesh->mNumVertices * sizeof(GLfloat) * 3, mesh->mVertices, GL_STATIC_DRAW);
                    glEnableVertexAttribArray((GLuint)0);
                    glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);
                }
                // generate VBO for normals
                if (mesh->HasNormals()) {
                    normalArray = new float[scene->mMeshes[i]->mNumVertices * 3];
                    unsigned int normalIndex = 0;
                    for (unsigned int j = 0; j < scene->mMeshes[i]->mNumVertices * 3; j += 3, normalIndex++) {
                        normalArray[j] = scene->mMeshes[i]->mNormals[normalIndex].x; // x
                        normalArray[j + 1] = scene->mMeshes[i]->mNormals[normalIndex].y; // y
                        normalArray[j + 2] = scene->mMeshes[i]->mNormals[normalIndex].z; // z
                    }
                    normalIndex = 0;
                    glGenBuffers(1, &vboID[vboIndex + 1]);
                    glBindBuffer(GL_ARRAY_BUFFER, vboID[vboIndex + 1]);
                    glBufferData(GL_ARRAY_BUFFER, mesh->mNumVertices * sizeof(GLfloat) * 3, normalArray, GL_STATIC_DRAW);
                    glEnableVertexAttribArray((GLuint)1);
                    glVertexAttribPointer((GLuint)1, 3, GL_FLOAT, GL_FALSE, 0, 0);
                    delete [] normalArray;
                }
                // tex coord stuff goes here
                // unbind buffers
                glBindVertexArray(0);
                glBindBuffer(GL_ARRAY_BUFFER, 0);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
                delete [] indexArray;
            }
            vboIndex = 0;
            return;
        }
    }

    file: shader.vert

    #version 150 core

    in vec3 in_Position;
    in vec3 in_Normal;

    uniform mat4 projectionMatrix;
    uniform mat4 viewMatrix;
    uniform mat4 modelMatrix;
    uniform vec3 lightPosition;
    uniform mat3 normalMatrix;

    smooth out vec3 vVaryingNormal;
    smooth out vec3 vVaryingLightDir;

    void main() {
        // derive MVP and MV matrices
        mat4 modelViewProjectionMatrix = projectionMatrix * viewMatrix * modelMatrix;
        mat4 modelViewMatrix = viewMatrix * modelMatrix;
        // get surface normal in eye coordinates
        vVaryingNormal = normalMatrix * in_Normal;
        // get vertex position in eye coordinates
        vec4 vPosition4 = modelViewMatrix * vec4(in_Position, 1.0);
        vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
        // get vector to light source
        vVaryingLightDir = normalize(lightPosition - vPosition3);
        // Set the position of the current vertex
        gl_Position = modelViewProjectionMatrix * vec4(in_Position, 1.0);
    }

    file: shader.frag

    #version 150 core

    out vec4 out_Color;

    uniform vec3 ambientColor;
    uniform vec3 diffuseColor;
    uniform vec3 specularColor;

    smooth in vec3 vVaryingNormal;
    smooth in vec3 vVaryingLightDir;

    void main() {
        // dot product gives us diffuse intensity
        float diff = max(0.0, dot(normalize(vVaryingNormal), normalize(vVaryingLightDir)));
        // multiply intensity by diffuse color, force alpha to 1.0
        out_Color = vec4(diff * diffuseColor, 1.0);
        // add in ambient light
        out_Color += vec4(ambientColor, 1.0);
        // specular light
        vec3 vReflection = normalize(reflect(-normalize(vVaryingLightDir), normalize(vVaryingNormal)));
        float spec = max(0.0, dot(normalize(vVaryingNormal), vReflection));
        if (diff != 0) {
            float fSpec = pow(spec, 128.0);
            // Set the output color of our current pixel
            out_Color.rgb += vec3(fSpec, fSpec, fSpec);
        }
    }

    I know it's a lot to look through, but I'm putting most of the code up so as not to assume where the problem is. Thanks in advance to anyone who has some time to help me pinpoint the problem(s)! I've been trying to sort it out for two days now and I'm not getting anywhere on my own.
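
    For reference (an editorial footnote, not part of the original question), the model the shaders above approximate is the classic Phong equation:

    $$ I = k_a\,i_a + k_d\,(\hat{L}\cdot\hat{N})\,i_d + k_s\,(\hat{R}\cdot\hat{V})^{\alpha}\,i_s $$

    Here \hat{R} is the light direction reflected about the normal, \hat{V} points toward the viewer, and \alpha is the shininess exponent (128.0 in shader.frag). Comparing term by term against shader.frag may help narrow things down: the specular dot product there is taken against the normal rather than a view vector, and the ambient term is added as vec4(ambientColor, 1.0), so both brightness and alpha accumulate on top of an already-opaque color.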

    Read the article

  • Why is firefox 4 not HW accelerated?

    - by acidzombie24
    At first I thought it was my computer, but then I tried Chrome. Why isn't Firefox hardware accelerated? The first screenshot shows Chrome at 23% usage. The second shows 59%. I have 2 CPUs, which is why it isn't 100% usage. The game pictured is Biolab.

    Here's the text from about:support:

    Application Basics
      Name: Firefox
      Version: 4.0
      User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0
      Profile Directory: Open Containing Folder
      Enabled Plugins: about:plugins
      Build Configuration: about:buildconfig

    Extensions
      Name Version Enabled ID (none)

    Modified Preferences
      accessibility.typeaheadfind.flashBar: 0
      browser.places.importBookmarksHTML: false
      browser.places.smartBookmarksVersion: 2
      browser.startup.homepage_override.buildID: 20110303194838
      browser.startup.homepage_override.mstone: rv:2.0
      extensions.lastAppVersion: 4.0
      gfx.font_rendering.directwrite.enabled: true
      network.cookie.prefsMigrated: true
      places.history.expiration.transient_current_max_pages: 127602
      privacy.sanitize.migrateFx3Prefs: true

    Graphics
      Adapter Description: Mobile Intel(R) 4 Series Express Chipset Family
      Vendor ID: 8086
      Device ID: 2a42
      Adapter RAM: Unknown
      Adapter Drivers: igdumd64 igd10umd64 igdumdx32 igd10umd32
      Driver Version: 8.15.10.2202
      Driver Date: 8-25-2010
      Direct2D Enabled: true
      DirectWrite Enabled: true (6.1.7600.16385, font cache n/a)
      WebGL Renderer: Google Inc. -- ANGLE -- OpenGL ES 2.0 (ANGLE 0.0.0.541)
      GPU Accelerated Windows: 1/1 Direct3D 10

    Read the article

  • UNC Path fails by IP "no network provider accepted the given network path", but works using hostname

    - by BoyMars
    I have an unusual problem with a Windows Server 2003 (Standard x86) box. It appears the machine will not accept connections to its shares (locally and from other domain member servers) when the UNC path uses its IP address. The error returned is: "No network provider accepted the given network path."

    This is the case with the machine's IP address (\\10.0.8.x) and even the loopback address (\\127.0.0.1). \\localhost does not work either, but using the hostname (FQDN or not) works: \\server and \\server.domain.local.

    The local Windows firewall for this server is off, and ping/RDP/other services respond fine using the IP address. The following services are running and have been restarted: Computer Browser, Workstation, Server. The server itself has been rebooted too.

    Event 8032 in the system log indicates: "The browser service has failed to retrieve the backup list too many times on transport \Device\NetBT_Tcpip_{29A6A925-AFB3-47E2-BA59-DDA086DEAE7A}. The backup browser is stopping."

    The domain controller has not been restarted and no other servers have experienced this problem, yet there are a number of browser-related (8021) errors in the logs on this server. Does anyone have any suggestions? I would like to avoid rejoining this server to the domain if possible.

    Read the article

  • Why am I getting Network error: 403 Forbidden in firebug for files I am not trying to access?

    - by moomoochoo
    QUESTIONS

    I'd like to know:

    1. why I am getting "Network error: 403 Forbidden" in Firebug for files that I am not trying to access;
    2. whether it is likely to cause any serious problems on the webserver;
    3. how to fix it;
    4. why my browser is trying to access the files in the error message.

    DETAILS

    I'm using WampServer 2.2 to access a folder via the browser. The browser is on the same computer as the server, and the computer is running Windows 7 Ultimate. When I view a web folder via my browser (hXXp://localhost/folder) I can see the folder contents OK, but in Firebug I get "Network error: 403 Forbidden". I'm not deliberately trying to access the files in the error messages; you will notice they are in a completely different folder to the one I am looking at.

    I checked apache_error.log and see:

    [Wed Sep 26 00:05:10 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/apache2, referer: hxxp://localhost/folder/

    WampServer 2.2 is installed on the D drive. I took a look at the httpd.conf file, but I couldn't find any references to C:. When I look in Apache's access.log I see:

    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/blank.gif HTTP/1.1" 403 217
    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/back.gif HTTP/1.1" 403 216
    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/text.gif HTTP/1.1" 403 216
    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/unknown.gif HTTP/1.1" 403 219
    127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/folder.gif HTTP/1.1" 403 218

    CONFIGURATION

    WampServer 2.2 installed on drive D
    Apache 2.2.22
    PHP 5.4.3
    MySQL 5.5.24
    Firebug 1.10.3
    Firefox 15.0.1

    Read the article

  • How does a web server/the http protocol handle version control and compression?

    - by Sune Rasmussen
    When a client browser requests a file from the web server, I know that some kind of check is performed, because the files needed to serve the web page may already be cached by the web browser. So, if a file exists in the cache, the file is not sent. But if the file on the server has changed since the file was cached in the browser, the file is sent and updated anyway.

    Then, if you have compression like gzipping enabled on the server, the files that are to be provided to the client must be gzipped on the way, requiring some amount of server-side processing. But how is this managed? The logical approach seems to me that the web server should have a cache as well, containing the newest version of all files that have been requested within a certain time span, and thus a compressed version of these files, so that compression would not have to be done each time a file is requested.

    And also, how are files actually requested? Does the browser ask for files each time it encounters one in the HTML code and the specific file is not stored in the local cache, or does it sum up all the files that are needed and ask for the whole bunch at the same time?

    But that's only guessing from a programming point of view, and I don't really know. If the answers are very different among web server systems, I'm primarily interested in Apache, but other answers are appreciated, too.
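
    A minimal sketch of the arrangement described above (an illustration only, not Apache's actual implementation; the docroot, port, and cache policy are assumptions, and error handling is omitted):

    // TypeScript (Node built-ins only): revalidation via ETag, plus a server-side
    // cache of the gzipped bytes so compression runs once per file version.
    import * as http from "http";
    import * as zlib from "zlib";
    import * as fs from "fs";

    const gzipCache = new Map<string, { etag: string; body: Buffer }>();

    http.createServer((req, res) => {
        const path = "./public" + req.url;              // illustrative docroot
        const stat = fs.statSync(path);
        const etag = `"${stat.size}-${stat.mtimeMs}"`;  // validator from metadata

        // If the browser's cached copy is still current, answer 304 with no body.
        if (req.headers["if-none-match"] === etag) {
            res.writeHead(304);
            res.end();
            return;
        }

        // Compress once per file version, then serve the cached bytes.
        let entry = gzipCache.get(path);
        if (!entry || entry.etag !== etag) {
            entry = { etag, body: zlib.gzipSync(fs.readFileSync(path)) };
            gzipCache.set(path, entry);
        }
        res.writeHead(200, { "ETag": etag, "Content-Encoding": "gzip" });
        res.end(entry.body);
    }).listen(8080);

    As for the second question: browsers generally request each referenced file separately as the parser encounters it (often over several parallel connections), rather than batching all needed files into one request.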

    Read the article

  • Preseeding Ubuntu partman recipe using LVM and RAID

    - by Swav
    I'm trying to preseed an Ubuntu 12.04 server installation and created a recipe that would create RAID 1 on 2 drives and then partition that using LVM. Unfortunately partman complains when creating LVM volumes, saying there are no partitions in the recipe that could be used with LVM (in the console it complains about an unusable recipe).

    The layout I'm after is RAID 1 on sdb and sdc (installing from a USB stick, so it takes sda), and then use LVM to create boot, root and swap. The odd thing is that if I change the mount point of boot_lv to home, the recipe works fine (apart from mounting in the wrong place), but when mounting at /boot it fails. I know I could use a separate /boot primary partition, but can anybody tell me why it fails? Recipe and relevant options below.

    ## Partitioning using RAID
    d-i partman-auto/disk string /dev/sdb /dev/sdc
    d-i partman-auto/method string raid
    d-i partman-lvm/device_remove_lvm boolean true
    d-i partman-md/device_remove_md boolean true
    #d-i partman-lvm/confirm boolean true
    d-i partman-auto-lvm/new_vg_name string main_vg
    d-i partman-auto/expert_recipe string \
        multiraid :: \
            100 512 -1 raid \
                $lvmignore{ } \
                $primary{ } \
                method{ raid } \
            . \
            256 512 256 ext3 \
                $defaultignore{ } \
                $lvmok{ } \
                method{ format } \
                format{ } \
                use_filesystem{ } \
                filesystem{ ext3 } \
                mountpoint{ /boot } \
                lv_name{ boot_lv } \
            . \
            2000 5000 -1 ext4 \
                $defaultignore{ } \
                $lvmok{ } \
                method{ format } \
                format{ } \
                use_filesystem{ } \
                filesystem{ ext4 } \
                mountpoint{ / } \
                lv_name{ root_lv } \
            . \
            64 512 300% linux-swap \
                $defaultignore{ } \
                $lvmok{ } \
                method{ swap } \
                format{ } \
                lv_name{ swap_lv } \
            .
    d-i partman-auto-raid/recipe string \
        1 2 0 lvm - \
            /dev/sdb1#/dev/sdc1 \
        .
    d-i mdadm/boot_degraded boolean true
    #d-i partman-md/confirm boolean true
    #d-i partman-partitioning/confirm_write_new_label boolean true
    #d-i partman/choose_partition select Finish partitioning and write changes to disk
    #d-i partman/confirm boolean true
    #d-i partman-md/confirm_nooverwrite boolean true
    #d-i partman/confirm_nooverwrite boolean true

    EDIT: After a bit of googling I found the snippet below in partman-auto-lvm, but I still don't understand why they would prevent that setup if it's possible to do manually, and booting from a boot partition on LVM is possible.

    # Make sure a boot partition isn't marked as lvmok
    if echo "$scheme" | grep lvmok | grep -q "[[:space:]]/boot[[:space:]]"; then
        bail_out unusable_recipe
    fi

    Read the article

  • Apache subdomain not working

    - by tandu
    I'm running Apache on my local machine and I'm trying to create a subdomain, but it's not working. Here is what I have (stripped down):

    <VirtualHost *:80>
        DocumentRoot /var/www/one
        ServerName one.localhost
    </VirtualHost>

    <VirtualHost *:80>
        DocumentRoot /var/www/two
        ServerName two.localhost
    </VirtualHost>

    I recently added one. The two entry has been around for a while, and it still works fine (displays the webpage when I go to two.localhost). In fact, I copied the entire two.localhost entry and simply changed two to one, but it's not working. I have tried each of the following:

    apachectl -k graceful
    apachectl -k restart
    /etc/init.d/apache2 restart
    /etc/init.d/apache2 stop && !#:0 start

    Apache will complain if /var/www/one does not exist, so I know it's doing something, but when I visit one.localhost in my browser, the browser complains that nothing is there. I put an index.html file there and also tried going to one.localhost/index.html directly, and the browser still won't find it. This is very perplexing, since the entry I copied from two.localhost is exactly the same. Not only that, but if something were wrong I would expect to get a 500 rather than the browser not being able to find anything. The error_log also has nothing extra.

    Read the article

  • Firefox 3.6 and above. Always show one tab even if all tabs are closed like Firefox 3.0

    - by Jayapal Chandran
    I got very used to Firefox 3.0. In that version, to free memory I close all tabs, but the main Firefox window still does not close. In Firefox 3.6 and later, however, if I close all tabs then Firefox exits entirely. This was not the case with 3.0. How can I stop Firefox from closing the main process when I close all tabs?

    Also, the autocomplete feature in the address bar of Firefox 3.6 and greater is a dark blue color, which annoys me greatly; with my environment and the monitor glare it is inducing anger in me. How can the color be changed to be like Firefox 3.0? Black and white are a neutral and good combination, and since I have been working in Firefox 3.0 (and earlier versions) for a long time, this new color change and other uncomfortable options are making me sick.

    To check CSS3 I need to use Firefox 3.5 and greater. Besides, I like Firefox because it includes the W3C's recommendations, so I can learn and test new recommendations from W3C.

    Read the article

  • Big and reaaaally strange problem with a web server (host InMotion Hosting)

    - by altar
    Hi. I have a terrible problem that I have been trying to solve for three days: I browse my own web site, and after a while I cannot access the web site AT ALL! I can only see a 501 error message: "Method Not Implemented. GET to / not supported. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request."

    Once I get that error, the site is totally and permanently inaccessible in that browser!! Reboots, browser restarts, clearing the cache, clearing all history and cookies, etc., do not work. I have reproduced it on 4 different computers. Three computers are in one city, the 4th is in another city. Two different ISPs also. One computer is on Linux, the others are on Windows (XP and 2000). Browsers are FF 3 to FF 3.5 and IE 8.

    The error is ALMOST reproducible on demand (for me at least). It appears when I browse the forum under certain circumstances. I don't know what these circumstances are, but if I browse it long enough (10 seconds to 5 minutes) it eventually appears. Just to make it clear: once the error appears (while browsing the forum), the whole web site becomes inaccessible, not only the forum! My host is not willing to help because they say they cannot reproduce the error. I sent screenshots but they don't care.

    NEWS: Resetting the browser's settings from 'Tools - Clear private data' didn't work. However! When I cleared the same settings (more exactly, cookies) from the special menu that appears when you right-click the website's icon, it worked. So it was something related to a cookie, BUT it manifests in all browsers (FF, IE, Opera), so it cannot be a browser-related problem.

    Read the article

  • ASP.NET, IIS7 and IE8 caching?

    - by jdege
    We're suddenly having problems with some of our sites having old versions of .css and .js files show up in the browser. Generally, these problems go away when the user clears the browser cache. Is there something we can do, either in the code or in IIS7, to convince the browser not to use the cached files?

    In our weirdest case, we have one customer whose users hit our site and get an old version of a js file. They clear the cache, load the page, get the current version, and the page runs fine. Then they load the page again, and suddenly have the old version again. Any ideas as to how that might be happening? I can think of three:

    1. The browser is somehow holding on to the old version when we clear the cache, and is putting it back in the cache before the second page load.
    2. One of our servers has an old version of the file, and while the first page load after a cache clear pulls it from one of the servers with the current version, second and subsequent page loads pull it from the server that has the old version.
    3. The first load after a cache clear goes straight to our servers, while subsequent loads pull the file from the cache on the customer's web proxy.

    I have to say, all three of those scenarios seem outlandishly unlikely, but it's a repeatable behavior. Any ideas?
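
    One common mitigation, sketched here for illustration (this is the general cache-busting idea, not something from the original question; names and paths are made up):

    // TypeScript: derive an asset URL that changes whenever the file content
    // changes, so stale cached copies are simply never requested again.
    import * as crypto from "crypto";
    import * as fs from "fs";

    function versionedUrl(localPath: string, publicPath: string): string {
        const hash = crypto.createHash("md5")
            .update(fs.readFileSync(localPath))
            .digest("hex")
            .slice(0, 8);
        return `${publicPath}?v=${hash}`;   // e.g. /scripts/site.js?v=1a2b3c4d
    }

    // Emit the tag with the versioned URL at render time, e.g.
    // <script src="/scripts/site.js?v=1a2b3c4d"></script>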

    Read the article

  • Reasons for Ajax navigation breaking on Coldfusion/Apache when running an app in iOS fullscreen mode? [closed]

    - by frequent
    Not sure if this belongs on SO or here. I'm running a web app using jQuery, jQuery Mobile and RequireJS, with Apache, ColdFusion 8 and MySQL 5.0.88 server-side. The app works fine until I try to run it in fullscreen mode on iOS (add the icon to the homescreen, launch the app from there with <meta name="apple-mobile-web-app-capable" content="yes" /> specified). This meta tag will break the jQuery Mobile AJAX navigation: the AJAX request will fail, and the requested page will be loaded as a new page, thereby restarting the app on every page change.

    I have chased this through the whole front end, starting from RequireJS through jQuery Mobile's AJAX navigation down to the AJAX request being made in jQuery:

    xhr.send( ( s.hasContent && s.data ) || null );

    In a regular browser this works, no problem. In fullscreen mode it fails (readyState=0, empty response). I have found this article, which argues that fullscreen mode is like a browser instance with different HTTP strings. On ASP.net this results in the browser not being identified by the server and only basic browser settings being assumed (e.g. no Javascript).

    I'm a little lost as to where to start looking for possible causes server-side. I have not written any server code for handling AJAX page navigation, so this must be something that is handled out of the box by ColdFusion or Apache?

    Question: Where should I start looking for probable causes of fullscreen mode breaking AJAX navigation if I assume ColdFusion or Apache are the culprits? Is there a setting I'm missing in httpd.conf? What else could be the problem? Thanks for inputs!
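
    One small, hedged debugging aid (my illustration; standalone is a non-standard, Safari-only property):

    // TypeScript: detect home-screen (fullscreen) mode so the failing requests
    // can at least be tagged and compared against the working case in the logs.
    const standalone = (navigator as { standalone?: boolean }).standalone === true;
    console.log(standalone ? "running fullscreen from homescreen" : "running in Safari");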

    Read the article

  • Add Zune Desktop Player to Windows 7 Media Center

    - by DigitalGeekery
    Are you a Zune owner who prefers the Zune player for media playback? Today we'll show you how to integrate the Zune player with WMC using Media Center Studio.

    You'll need to download Media Center Studio and the Zune Desktop player software. (See download links below.) Also, make sure you have Media Center closed; some of the actions in Media Center Studio cannot be performed while WMC is open.

    Open Media Center Studio and click on the Start Menu tab at the top of the application. Click the Application button. Here we will create an Entry Point for the Zune player so that we can add it to Media Center.

    Type in a name for your entry point in the title text box. This is the name that will appear under the tile when added to the Media Center start menu. Next, type in the path to the Zune player. By default this should be C:\Program Files\Zune\Zune.exe. Note: Be sure to use the original path, not a link to the desktop icon.

    The Active image is the image that will appear on the tile in Media Center. If you wish to change the default image, click the Browse button and select a different image.

    Select Stop the currently playing media from the When launched do the following: dropdown list. Otherwise, if you open the Zune player from WMC while playing another form of media, that media will continue to play in the background.

    Now we will choose a keystroke to use to exit the Zune player software and return to Media Center. Click on the green plus (+) button. When prompted, press a key to use to close the Zune player. Note: This may also work with your Media Center remote. You may want to set a keyboard keystroke as well as a button on your remote to close the program. You may not be able to set certain remote buttons to close the application; we found that the back arrow button worked well. You can also choose a keystroke to kill the program if desired. Be sure to save your work before exiting by clicking the Save button on the Home tab.

    Next, select the Start Menu tab and click on the arrow next to Entry points to reveal the available entry points. Find the Zune player tile in the Entry points area. We want to drag the tile out onto one of the menu strips on the start menu; we will drag ours onto the Extras Library strip. When you begin to drag the tile, green plus (+) signs will appear in between the tiles. When you've dragged the tile over any of the green plus signs, the red "Move" label will turn to a blue "Move to" label. Now you can drop the tile into position. Save your changes and then close Media Center Studio.

    When you open Media Center, you should see your Zune tile on the start menu. When you select the Zune tile in WMC, Media Center will be minimized and the Zune player will be launched. Now you can enjoy your media through the Zune player. When you close the Zune player with the previously assigned keystroke, or by clicking the "X" at the top right, Windows Media Center will be re-opened.

    Conclusion

    We found the Zune player worked with two different Media Center remotes that we tested. It was at times a little tricky to tell where you were when navigating through the Zune software with a remote, but it did work. In addition to managing your music, the Zune player is a nice way to add podcasts to your Media Center setup. We should also mention that you don't need to actually own a Zune to install and use the Zune player software. Media Center Studio works on both Vista and Windows 7.
    We covered Media Center Studio a bit more in depth in a previous post on customizing the Windows Media Center start menu. Are you new to the Zune player? Familiarize yourself a bit more by checking out some of our earlier posts, like how to update your Zune player, and experiencing your music a whole new way with Zune for PC.

    Downloads
    Zune Desktop Player download
    Media Center Studio download

    Read the article

  • Java Embedded @ JavaOne: Q & A

    - by terrencebarr
    There has been a lot of interest in Java Embedded @ JavaOne since it was announced a short while ago (see my previous post). As this is a new conference, we did get a number of questions regarding it, so we put together a brief Q & A on audience focus, dates, registration, pricing, submissions, etc. Hope this helps and, remember, the Call for Papers ends next week, Jul 18th 2012!

    Cheers,
    – Terrence

    Java Embedded @ JavaOne: Q & A

    Q. Where can I learn more about “Java Embedded @ JavaOne”?
    A. Please visit: http://oracle.com/javaone/embedded

    Q. What is the purpose of “Java Embedded @ JavaOne”?
    A. This net-new event is designed to provide business and technical decision makers, as well as Java embedded ecosystem partners, a unique occasion to come together and learn about how they can use Java Embedded technologies for new business opportunities.

    Q. What broad audiences would benefit by attending “Java Embedded @ JavaOne”?
    A. Java licensees; government agencies; ISVs; device manufacturers; service providers such as telcos, utilities, healthcare, energy, smart grid/smart metering; automotive/telematics; home/building automation; factory automation; media/TV; and payment vendors.

    Q. What business titles would benefit by attending “Java Embedded @ JavaOne”?
    A. The ideal audience for this event is business and technical decision makers (e.g. System Integrators, CTO, CXO, Chief Architects/Architects, Business Development Managers, Project Managers, Purchasing Managers, Technical Leads, Senior Decision Makers, Practice Leads, R&D Heads, and Development Managers/Leads).

    Q. When is “Java Embedded @ JavaOne” taking place?
    A. The event takes place on Wednesday, Oct. 3rd through Thursday, Oct. 4th.

    Q. Where is “Java Embedded @ JavaOne” taking place?
    A. The event takes place in the Hotel Nikko.

    Q. Won't “Java Embedded @ JavaOne” impact the flagship JavaOne conference, since the Hotel Nikko is one of the 3 flagship JavaOne conference venue hotels?
    A. No. Separate space in the Hotel Nikko will be used for “Java Embedded @ JavaOne” and will in no way impact the scale and scope of the flagship JavaOne conference's content mix.

    Q. Will there be a call for papers for “Java Embedded @ JavaOne”?
    A. Yes. The call for papers has started but is ONLY for business-focused submissions.

    Q. What types of business submissions can I make for “Java Embedded @ JavaOne”?
    A. We are accepting 3 types of business submissions:

    Best Practices: Java Embedded business solutions, methods, and techniques that consistently show results superior to those achieved with other means, as well as discussions on how Java Embedded can improve business operations and increase competitive differentiation and profitability.

    Case Studies: Discussions with Oracle customers and partners that describe the unique business drivers that convinced them to implement Java Embedded as part of an infrastructure technology mix. The discussions will highlight the issues they faced, the decision making involved, and the implementation choices made to create value and improve business differentiation.

    Panel: Moderator-driven open discussion focused on the emerging opportunities Java Embedded offers businesses, as well as other topics such as strategy, overcoming common challenges, etc.

    Q. What is the call for papers timeline for “Java Embedded @ JavaOne”?
    A. The timeline is as follows:

    CFP launched – June 18th
    Deadline for submissions – July 18th
    Notifications (accepts/declines) – week of July 29th
    Deadline for speakers to accept speaker invitation – August 10th
    Presentations due for review – August 31st

    Q. Where can I find more call for papers details for “Java Embedded @ JavaOne”?
    A. Please go to: http://www.oracle.com/javaone/embedded/call-for-papers/information/index.html

    Q. How much does it cost to attend “Java Embedded @ JavaOne”?
    A. The cost to attend is:

    $595.00 U.S. — Early Bird (launch date – July 13, 2012)
    $795.00 U.S. — Pre-Registration (July 14 – September 28, 2012)
    $995.00 U.S. — Onsite Registration (September 29 – October 4, 2012)

    Q. Can an attendee of the flagship JavaOne event and Oracle OpenWorld attend “Java Embedded @ JavaOne”?
    A. Yes. Attendees of both the flagship JavaOne event and Oracle OpenWorld can attend “Java Embedded @ JavaOne” by purchasing a $100.00 U.S. upgrade to their full conference pass.

    Read the article

  • 15 Oracle customers are 'Winners' at Progressive Mfgs 100 Awards

    - by [email protected]
This year, 15 Oracle customers will receive awards at Managing Automation's PM100 Event for their outstanding accomplishments in a number of supply chain applications innovation categories. The event will be held at the Breakers Hotel in Palm Beach, FL from May 3-6, 2010. Award winners include: Arvin Meritor, Ball Aerospace, US Dept. of Treasury/Engraving, Doosan Infracore, Freescale Semi, Ingersoll-Rand, JDS Uniphase, L&L Products, Masco Builders, Mercury Marine, Sanmina-SCI, Siemens Water Tech, US Concrete, and VirTex Assy Services. Details of the event and Oracle's sponsorship can be found at: http://www.managingautomation.com/awards/ or contact Stephen Slade at [email protected]

    Read the article

  • Entity Framework 4.0: Creating objects of correct type when using lazy loading

    - by DigiMortal
In my posting about Entity Framework 4.0 and POCOs I introduced lazy loading in EF applications. EF uses proxy classes for lazy loading, and this means we have new types that come and go dynamically at runtime. We don’t have these types available when we write code, but we cannot forget that EF may expect us to use dynamically generated types. In this posting I will give you a simple hint on how to use the correct types in your code.

The background of lazy loading and proxy classes

As a first thing I will explain to you in short what a proxy class is. Business classes, when designed correctly, have no knowledge about their birth and death – they don’t know how they are created and they don’t know how their data is persisted. This is the responsibility of the object runtime. When we use lazy loading we need slightly different classes that know how to load data for properties when code accesses a property for the first time. As we cannot add this functionality to our business classes (they may be stored through more than one data access technology or by more than one Data Access Layer (DAL)), we create proxy classes that extend our business classes. If we have a class called Product and the product has a lazy loaded property called Customer, then we need a proxy class, let’s say ProductProxy, that has the same public signature as Product so we can use it INSTEAD OF Product in our code. ProductProxy overrides the Customer property. If the customer is not asked for, then Customer is null. But if we ask for the Customer property, then the overridden property of ProductProxy loads it from the database. This is how lazy loading works.

Problem – two types for the same thing

As lazy loading may introduce dynamically generated proxy types, we don’t know in our application code which type is returned. We cannot be sure that we have Product and not ProductProxy returned. This leads us to the following question: how can we create a Product of the correct type if we don’t know the correct type? In EF the solution is simple.

Solution – use factory methods

If you are using repositories and you are not using factories (imho they are pretty pointless with a mapper) you can add factory methods to your EF based repositories. Take a look at this class.

public class Event
{
    public int ID { get; set; }
    public string Title { get; set; }
    public string Location { get; set; }
    public virtual Party Organizer { get; set; }
    public DateTime Date { get; set; }
}

We have a virtual member called Organizer. This property is virtual because we want to use lazy loading on this class, so Organizer is loaded only when we ask for it. EF provides us with a method called CreateObject<T>(). CreateObject<T>() is a member of the ObjectContext class and it creates an object based on the given type. At runtime a proxy type for Event is created for us automatically, and when we call CreateObject<T>() for Event it returns an object of the Event proxy type. The factory method for the events repository is as follows.

public Event CreateEvent()
{
    var evt = _context.CreateObject<Event>();
    return evt;
}

And we are done. Instead of creating factory classes we created factory methods that guarantee that created objects are of the correct type.

Conclusion

Although lazy loading introduces some new types that we cannot use at design time because they live only at runtime, we can write code without worrying about the exact implementation type of an object. This holds true as long as we have clean code and we don’t make any decisions based on object type. EF 4.0 provides us with a very simple factory method that creates and returns objects of the correct type. All we had to do was add factory methods to our repositories.
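As a side note, proxy creation and lazy loading can be switched on and off through the context. The following is only a minimal sketch – EventsContext is a hypothetical ObjectContext subclass introduced here for illustration – showing the two ContextOptions flags that the factory method above relies on:

// Sketch only: EventsContext is an assumed ObjectContext subclass.
using (var context = new EventsContext())
{
    // ProxyCreationEnabled must be true for CreateObject<T>() to return
    // the dynamically generated proxy type instead of the plain POCO type.
    context.ContextOptions.ProxyCreationEnabled = true;

    // LazyLoadingEnabled must be true for the virtual Organizer property
    // to be loaded from the database on first access.
    context.ContextOptions.LazyLoadingEnabled = true;

    var evt = context.CreateObject<Event>();
    evt.Title = "Team meeting"; // evt is of the Event proxy type here
}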

    Read the article

  • CodePlex Daily Summary for Sunday, March 07, 2010

CodePlex Daily Summary for Sunday, March 07, 2010

New Projects
Algorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.
ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: * A Numeric Text Box (an Extended NumericUpDown) * A Splash Screen base fo...
Automaton Home: Automaton is a home automation software built with an n-Tier, MVVM pattern utilizing WCF, EF, WPF, Silverlight and XBAP.
Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.
Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...
indiologic: Utilities of an Indio
Neural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...
Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering Fair
Pólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. * By we, we mean I; and by efficient, I mean hopefully so.
project euler solutions from mhinze: mhinze project euler solutions
Silverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layers
sqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...
SuperSocket: SuperSocket, a socket application framework that can build FTP/SMTP/POP servers easily
Toast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVC

New Releases
ALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox, SplashScreen. Based on the VB.NET code, but that doesn't really matter.
Blacklist of Providers: 1.0-Milestone 1: Blacklist of Providers, Milestone 1. In this development release we implemented: - Main interface (Work Item #5453) - Database (Work Item #5523)
C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.
Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1 built against .NET 4.0. The WCF tests currently fail unless...
Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download them individually or all together (in a .zip file). More releases coming soon!
Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...
ESB Toolkit Extensions: Tellago SOA ESB Extensions v0.3: Windows Installer file that installs the Library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 Alpha
Home Access Plus+: v3.0.3.0: Version 3.0.3.0 Release Change Log: Added Announcement Box; Removed script files that aren't needed; Fixed & issue in directory path; Stylesheet...
Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...
mavjuz WndLpt: wndlpt-0.2.5: New: Response to 5 LPT inputs "test i 1"; New: Reaction to 12 LPT outputs "test q 8"; New: Reaction to all LPT pins "test pin 15"; New: Syntax: ...
Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND and a possibility to recreate a neural network from...
Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double clicking on a generic item.
RoTwee: RoTwee 6.2.0.0: New feature is as follows: 16649 Add hashtag for tweet of tune. Now you can tweet your playing tune with a hashtag.
Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want at the click of a button. All you got to do is click the button and browse via th...
WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: Whats New: File Browser: Folders & Files View reworked; File Browser: Folders are displayed as TreeVi...
WSDLGenerator: WSDLGenerator 0.0.0.4: - replaced CommonLibrary.dll by CommandLineParser.dll - added better support for custom complex types

Most Popular Projects
MetaSharp
Silverlight Toolkit
ASP.NET Ajax Library
All-In-One Code Framework
Windows 7 USB/DVD Download Tool
ニコ生アラート
Windows Double Explorer
Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2
Caliburn: An Application Framework for WPF and Silverlight
ArkSwitch

Most Active Projects
Umbraco CMS
Rawr
SDS: Scientific DataSet library and tools
BlogEngine.NET
jQuery Library for SharePoint Web Services
patterns & practices – Enterprise Library
Ionics Isapi Rewrite Filter
Farseer Physics Engine
Fasterflect - A Fast and Simple Reflection API
Fluent Assertions

    Read the article

  • SQLAuthority News – Community Tech Days – A SQL Legends in Ahmedabad – December 11, 2010

    - by pinaldave
Ahmedabad is going to be a fortunate city again on December 11. We are going to have SQL Server legends present at the prestigious Community Tech Days event in Ahmedabad. The venue details are as follows:

H K Hall, H K College Campus, Near Handloom House, Opp. Natraj Cinema, Ashram Road, Ahmedabad – 380009

Click here to register for the event. The agenda of the event is as follows:

10:15am – 10:30am     Welcome – Pinal Dave
10:30am – 11:15am     SQL Tips and Tricks for .NET Developers by Jacob Sebastian
11:15am – 11:30am     Tea Break
11:30am – 12:15pm     Best Database Practices for SharePoint Server by Pinal Dave
12:15pm – 01:00pm     Self Service Business Intelligence by Rushabh Mehta
01:00pm – 02:00pm     Lunch
02:00pm – 02:45pm     Managing your future, Managing your time by Vinod Kumar
02:45pm – 03:30pm     Windows Azure News and Introducing Storage Services by Mahesh Devjibhai Dhola
03:30pm – 03:45pm     Tea Break
03:45pm – 04:30pm     Improve Silverlight applications with Threads and MEF by Prabhjot Singh Bakshi
04:30pm – 04:45pm     Thank you – Mahesh Devjibhai Dhola

Ahmedabad considers itself extremely fortunate when there are SQL legends presenting on various subjects in front of the community. Here is a brief introduction about them in my own words (their names are in order of the agenda).

1) Jacob Sebastian (SQL Server MVP) – This person needs no introduction. Every developer and programmer in Ahmedabad and India knows him. He is the man behind various community-related ideas like SQL Challenges, SQL Quiz and BeyondRelational. He works with me on all the community-related activities; we are extremely good friends.

2) Rushabh Mehta (SQL Server MVP) – If you use SQL Server – you know this man. He is the President of the Professional Association for SQL Server (PASS) and one of the leading Business Intelligence (BI) experts renowned in the world. He has blessed Ahmedabad once before and is now doing so once again this year.

3) Vinod Kumar (Microsoft Evangelist – SQL Server & BI) – Ahmedabad remembers him very well. During his last visit to Ahmedabad, a fight almost broke out outside the hall amidst the rush to listen to him. There were more people standing and listening to him than those who were seated. This is one man Ahmedabad will never forget.

4) and Myself. I will not rate myself in the league of the abovementioned experts, but I must say that I am fortunate to have friends like those above.

We also have two strong .NET presenters – Mahesh and Prabhjot. During this event, there will be plenty of giveaways, lots of fun, demos and pure technical talk – specifically no marketing and promotion, just pure technical talk. The most interesting part is that all the SQL legends – Jacob, Rushabh and Vinod – are for sure presenting on SQL Server, but with a twist.

Jacob – He is going to talk about .NET and SQL – Optimization Techniques
Rushabh – He is going to talk about SQL and BI – Self Service BI
Vinod – He is going to talk about professional development of developers – Managing Time
Pinal – Best Practices for SharePoint Database Administrators – SharePoint DBA – I have presented this session earlier.

I promise this event is going to be one of the best events ever held. You can read about the earlier event over here.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Devfish Joe Healy in Fort Lauderdale - Cloud Computing and Azure - 03/11/2010 MSDN Tiki Hut

    - by Rainer
Devfish Joe Healy, Brian Hitney, and Herve Rogero presented excellent sessions at today's MSDN Tiki Hut event about Cloud Computing and Azure. This was a developer-focused event, starting out with an overview of the structure and platform, followed by working code samples running on the platform, and all the information developers need to get started on development of cloud applications. Participants had Q&A opportunities after each session and made good use of them. I am sure that a lot of developers will jump on the Azure train. Azure is on top of my dev project list after that great event! This platform offers endless opportunities for development and businesses. The cloud environment in general is safer, scales better, and is far more cost effective compared to running and maintaining your own data center. Posted: Rainer Habermann

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
Thanks to all the comments and feedback from the last post, I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try and sum it up here, and point out some areas where I could still use some advice.

CQRS Benefits

It sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this: in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also, the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.

Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-through” type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more (a sketch of such a handler follows at the end of this post). Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software.

Of course CQRS provides some great scalability benefits – not only scalability but, I have to assume, extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you’re going to have your client loosely coupled to your domain – which is still a great benefit to be able to realize.

Questions / Concerns

If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a “pass-through” manner, with the command handler directly generating events? Is it possible to have an aggregate that isn’t modeled in the domain model? Are there any issues or downsides to this?

I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it was only servicing commands. My question is, do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my Commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not?

I like the idea of using the Query side of the architecture as a place to put junior devs where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it.
Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don’t know about everybody else, but in my experience, having outsourced developers do anything that requires significant domain knowledge has never been successful. And if you need to spec out for each new query which events to listen to and what to do with each one, well, that’s probably going to be just as much work to document as it would be to just implement it.

Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine in that case you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName, and TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be any more inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation.

Like I mentioned in my last post, I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides. My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that. I want to elaborate more on this with some example situations in an upcoming post.
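To make the “pass-through” command idea above concrete, here is a minimal C# sketch of a command handler that never touches the domain model – it validates the input and publishes an event for the query side to project into a de-normalized read table. All of the type names (PlaceOrderCommand, OrderPlaced, IEventBus, TotalSalesProjection) are hypothetical, and the in-memory projection is just a stand-in for a real read-model table:

using System;
using System.Collections.Generic;

public class PlaceOrderCommand { public Guid CustomerId; public decimal Total; }
public class OrderPlaced { public Guid CustomerId; public decimal Total; }

public interface IEventBus { void Publish(object evt); }

public class PlaceOrderHandler
{
    private readonly IEventBus _bus;
    public PlaceOrderHandler(IEventBus bus) { _bus = bus; }

    // A "pass-through" command: no aggregate is loaded; the handler
    // just validates the input and emits the corresponding event.
    public void Handle(PlaceOrderCommand cmd)
    {
        if (cmd.Total < 0) throw new InvalidOperationException("Total cannot be negative.");
        _bus.Publish(new OrderPlaced { CustomerId = cmd.CustomerId, Total = cmd.Total });
    }
}

// Query side: listens for OrderPlaced and keeps a de-normalized
// "Total Sales By Customer" view up to date.
public class TotalSalesProjection
{
    private readonly Dictionary<Guid, decimal> _totalSalesByCustomer =
        new Dictionary<Guid, decimal>();

    public void When(OrderPlaced evt)
    {
        decimal current;
        _totalSalesByCustomer.TryGetValue(evt.CustomerId, out current);
        _totalSalesByCustomer[evt.CustomerId] = current + evt.Total;
    }

    public decimal TotalSalesFor(Guid customerId)
    {
        decimal total;
        _totalSalesByCustomer.TryGetValue(customerId, out total);
        return total;
    }
}

The point of the sketch is the shape, not the specifics: the command side carries the domain decision (validation), while the query side carries only enough event-handling knowledge to maintain its tables – which is exactly the part I argued above still requires real domain knowledge.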

    Read the article

  • Guest Post: Instantiate SharePoint Workflow On Item Deleted

    - by Brian Jackett
In this post, guest author Lucas Eduardo Silva will walk you through the steps of instantiating a workflow using an item event receiver on a custom list.  The ItemDeleting event will require approval via the workflow.

Foreword
    As you may have read recently, I injured my right hand and have had it in a cast for the past 3 weeks.  Due to this I planned to reduce my blogging while my hand heals.  As luck would have it, I was actually approached by someone who asked if they could be a guest author on my blog.  I’ve never had a guest author, but considering my injury now seemed like as good a time as ever to try it out.

About the Guest Author
    Lucas Eduardo Silva (email) works for CPM Braxis, a sibling company to my employer Sogeti in the CapGemini family.  Lucas and I exchanged emails a few times after one of my recent posts and continued into various topics.  When I posted that I had injured my hand, Lucas mentioned that he had a post idea that he would like to publish and asked if it could be published on my blog.  The content below is the result of that collaboration.

The Problem
    Lucas has a big problem.  He has a workflow that he wants to fire every time an item is deleted from a custom list. He has already created the association in the ItemDeleting event and needs to approve the deletion, but the workflow is finishing first. Lucas put in an onWorkflowItemChanged activity to wait for the approval status to change, but it is not being hit.

The Solution
Note: This solution assumes you have the Visual Studio Extensions for Windows SharePoint Services (VSeWSS) installed to access the SharePoint project templates within Visual Studio.

1 - Create a workflow that will be activated by the item event receiver.
2 - Create the list project in Visual Studio by clicking File -> New -> Project. Select SharePoint, then List Definition.
3 - Select the type of document to be created: List, Document Library, Wiki, Tasks, etc.
4 - Visual Studio creates the file ItemEventReceiver.cs with all possible events for a list.
5 - In the workflow project, open workflow.xml and copy the ID.
6 - Uncomment ItemDeleting and insert the following code, replacing the ID with the one you copied earlier.

//Cancel the Exclusion
properties.Cancel = true;

//Activating Exclusion Workflow
SPWorkflowManager workflowManager = properties.ListItem.Web.Site.WorkflowManager;

SPWorkflowAssociation wfAssociation = properties.ListItem.ParentList.WorkflowAssociations.GetAssociationByBaseID(new Guid("37b5aea8-792a-4ded-be25-d283d9fe1f9d"));

workflowManager.StartWorkflow(properties.ListItem, wfAssociation, wfAssociation.AssociationData, true);

properties.Status = SPEventReceiverStatus.CancelNoError;

7 - properties.Cancel cancels the event being activated and executes the code that is inside the event. In the example, it cancels the deletion of the item in order to start the workflow associated with the list via the workflow ID.
8 - Build and deploy the workflow and the list to SharePoint.
9 - Create a list from the template that was created.
10 - Enable the workflow on the list and congratulations! Every time you try to delete an item, the workflow is activated.
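For orientation, the snippet from step 6 sits inside the ItemDeleting override of the event receiver class that Visual Studio generated in step 4. A minimal sketch of the surrounding class follows (the class name here is an assumption; the generated file will use its own):

public class ApprovalItemEventReceiver : SPItemEventReceiver
{
    // Fires before an item is deleted; cancelling here gives the
    // approval workflow a chance to run first.
    public override void ItemDeleting(SPItemEventProperties properties)
    {
        properties.Cancel = true;
        // ... start the approval workflow as shown in step 6 ...
        properties.Status = SPEventReceiverStatus.CancelNoError;
    }
}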
TIP: If you really want to delete the item after the workflow is done, you will have to delete the item from the workflow itself:

this.workflowProperties.Site.AllowUnsafeUpdates = true;
this.workflowProperties.Item.Delete();
this.workflowProperties.List.Update();

Conclusion
    In this guest post Lucas took you through the steps of creating an item deletion approval workflow with an event receiver.  This was also the first time I’ve had a guest author on this blog.  Many thanks to Lucas for putting together this content and offering it.  I haven’t decided how I’d handle future guest authors, mostly because I don’t know if there are others who would want to submit content.  If you do have something that you would like to guest author on my blog, feel free to drop me a line and we can discuss.  As a disclaimer, there are no guarantees that it will be published though.  For now enjoy Lucas’ post and look for my return to regular blogging soon.         -Frog Out   <Update 1> If you wish to contact Lucas you can reach him at [email protected] </Update 1>

    Read the article

  • Understanding G1 GC Logs

    - by poonam
The purpose of this post is to explain the meaning of GC logs generated with some tracing and diagnostic options for G1 GC. We will take a look at the output generated with PrintGCDetails, which is a product flag that provides the most detailed level of information. Along with that, we will also look at the output of two diagnostic flags that get enabled with the -XX:+UnlockDiagnosticVMOptions option - G1PrintRegionLivenessInfo, which prints the occupancy and the amount of space used by live objects in each region at the end of the marking cycle, and G1PrintHeapRegions, which provides detailed information on the heap regions being allocated and reclaimed. We will be looking at the logs generated with JDK 1.7.0_04 using these options.

Option -XX:+PrintGCDetails

Here's a sample log of a G1 collection generated with PrintGCDetails.

0.522: [GC pause (young), 0.15877971 secs]
   [Parallel Time: 157.1 ms]
      [GC Worker Start (ms): 522.1 522.2 522.2 522.2 Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1]
      [Ext Root Scanning (ms): 1.6 1.5 1.6 1.9 Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4]
      [Update RS (ms): 38.7 38.8 50.6 37.3 Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3]
         [Processed Buffers : 2 2 3 2 Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1]
      [Scan RS (ms): 9.9 9.7 0.0 9.7 Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9]
      [Object Copy (ms): 106.7 106.8 104.6 107.9 Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3]
      [Termination (ms): 0.0 0.0 0.0 0.0 Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0]
         [Termination Attempts : 1 4 4 6 Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5]
      [GC Worker End (ms): 679.1 679.1 679.1 679.1 Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1]
      [GC Worker (ms): 156.9 157.0 156.9 156.9 Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1]
      [GC Worker Other (ms): 0.3 0.3 0.3 0.3 Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0]
   [Clear CT: 0.1 ms]
   [Other: 1.5 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.3 ms]
      [Ref Enq: 0.0 ms]
      [Free CSet: 0.3 ms]
   [Eden: 12M(12M)->0B(10M) Survivors: 0B->2048K Heap: 13M(64M)->9739K(64M)]
[Times: user=0.59 sys=0.02, real=0.16 secs]

This is the typical log of an Evacuation Pause (G1 collection) in which live objects are copied from one set of regions (young OR young+old) to another set. It is a stop-the-world activity and all the application threads are stopped at a safepoint during this time. This pause is made up of several sub-tasks indicated by the indentation in the log entries. Here is the topmost line that gets printed for the Evacuation Pause.

0.522: [GC pause (young), 0.15877971 secs]

This is the highest level of information, telling us that it is an Evacuation Pause that started at 0.522 secs from the start of the process, in which all the regions being evacuated are Young, i.e. Eden and Survivor regions. This collection took 0.15877971 secs to finish. Evacuation Pauses can be mixed as well, in which case the set of regions selected includes all of the young regions as well as some old regions.

1.730: [GC pause (mixed), 0.32714353 secs]

Let's take a look at all the sub-tasks performed in this Evacuation Pause.

[Parallel Time: 157.1 ms]

Parallel Time is the total elapsed time spent by all the parallel GC worker threads. The following lines correspond to the parallel tasks performed by these worker threads in this total parallel time, which in this case is 157.1 ms.

[GC Worker Start (ms): 522.1 522.2 522.2 522.2
Avg: 522.2, Min: 522.1, Max: 522.2, Diff: 0.1]

The first line tells us the start time of each of the worker threads in milliseconds.
The start times are ordered with respect to the worker thread ids – thread 0 started at 522.1ms and thread 1 started at 522.2ms from the start of the process. The second line tells the Avg, Min, Max and Diff of the start times of all of the worker threads.

[Ext Root Scanning (ms): 1.6 1.5 1.6 1.9
Avg: 1.7, Min: 1.5, Max: 1.9, Diff: 0.4]

This gives us the time spent by each worker thread scanning the roots (globals, registers, thread stacks and VM data structures). Here, thread 0 took 1.6ms to perform the root scanning task and thread 1 took 1.5ms. The second line clearly shows the Avg, Min, Max and Diff of the times spent by all the worker threads.

[Update RS (ms): 38.7 38.8 50.6 37.3
Avg: 41.3, Min: 37.3, Max: 50.6, Diff: 13.3]

Update RS gives us the time each thread spent in updating the Remembered Sets. Remembered Sets are the data structures that keep track of the references that point into a heap region. Mutator threads keep changing the object graph and thus the references that point into a particular region. We keep track of these changes in buffers called Update Buffers. The Update RS sub-task processes the update buffers that could not be processed concurrently, and updates the corresponding remembered sets of all regions.

[Processed Buffers : 2 2 3 2
Sum: 9, Avg: 2, Min: 2, Max: 3, Diff: 1]

This tells us the number of Update Buffers (mentioned above) processed by each worker thread.

[Scan RS (ms): 9.9 9.7 0.0 9.7
Avg: 7.3, Min: 0.0, Max: 9.9, Diff: 9.9]

These are the times each worker thread spent in scanning the Remembered Sets. The Remembered Set of a region contains cards that correspond to the references pointing into that region. This phase scans those cards looking for the references pointing into all the regions of the collection set.

[Object Copy (ms): 106.7 106.8 104.6 107.9
Avg: 106.5, Min: 104.6, Max: 107.9, Diff: 3.3]

These are the times spent by each worker thread copying live objects from the regions in the Collection Set to the other regions.

[Termination (ms): 0.0 0.0 0.0 0.0
Avg: 0.0, Min: 0.0, Max: 0.0, Diff: 0.0]

Termination time is the time spent by the worker thread offering to terminate. But before terminating, it checks the work queues of other threads, and if there are still object references in other work queues, it tries to steal object references; if it succeeds in stealing a reference, it processes that and offers to terminate again.

[Termination Attempts : 1 4 4 6
Sum: 15, Avg: 3, Min: 1, Max: 6, Diff: 5]

This gives the number of times each thread has offered to terminate.

[GC Worker End (ms): 679.1 679.1 679.1 679.1
Avg: 679.1, Min: 679.1, Max: 679.1, Diff: 0.1]

These are the times in milliseconds at which each worker thread stopped.

[GC Worker (ms): 156.9 157.0 156.9 156.9
Avg: 156.9, Min: 156.9, Max: 157.0, Diff: 0.1]

These are the total lifetimes of each worker thread.

[GC Worker Other (ms): 0.3 0.3 0.3 0.3
Avg: 0.3, Min: 0.3, Max: 0.3, Diff: 0.0]

These are the times that each worker thread spent performing some other tasks that are not accounted for above in the total Parallel Time.

[Clear CT: 0.1 ms]

This is the time spent in clearing the Card Table. This task is performed in serial mode.

[Other: 1.5 ms]

Time spent in some other tasks listed below. The following sub-tasks (which individually may be parallelized) are performed serially.

[Choose CSet: 0.0 ms]

Time spent in selecting the regions for the Collection Set.

[Ref Proc: 0.3 ms]

Total time spent in processing Reference objects.
[Ref Enq: 0.0 ms]

Time spent in enqueuing references to the ReferenceQueues.

[Free CSet: 0.3 ms]

Time spent in freeing the collection set data structure.

[Eden: 12M(12M)->0B(13M) Survivors: 0B->2048K Heap: 14M(64M)->9739K(64M)]

This line gives the details on the heap size changes with the Evacuation Pause. This shows that Eden had an occupancy of 12M and its capacity was also 12M before the collection. After the collection, its occupancy got reduced to 0, since everything is evacuated/promoted from Eden during a collection, and its target size grew to 13M. The new Eden capacity of 13M is not reserved at this point. This value is the target size of the Eden. Regions are added to Eden as the demand is made, and when the added regions reach the target size, we start the next collection. Similarly, Survivors had an occupancy of 0 bytes and grew to 2048K after the collection. The total heap occupancy and capacity were 14M and 64M respectively before the collection, and they became 9739K and 64M after the collection.

Apart from the evacuation pauses, G1 also performs concurrent-marking to build the live data information of regions.

1.416: [GC pause (young) (initial-mark), 0.62417980 secs]
….....
2.042: [GC concurrent-root-region-scan-start]
2.067: [GC concurrent-root-region-scan-end, 0.0251507]
2.068: [GC concurrent-mark-start]
3.198: [GC concurrent-mark-reset-for-overflow]
4.053: [GC concurrent-mark-end, 1.9849672 sec]
4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs]
[Times: user=0.00 sys=0.00, real=0.00 secs]
4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs]
[Times: user=0.00 sys=0.00, real=0.00 secs]
4.090: [GC concurrent-cleanup-start]
4.091: [GC concurrent-cleanup-end, 0.0002721]

The first phase of a marking cycle is Initial Marking, where all the objects directly reachable from the roots are marked, and this phase is piggy-backed on a fully young Evacuation Pause.

2.042: [GC concurrent-root-region-scan-start]

This marks the start of a concurrent phase that scans the set of root regions which are directly reachable from the survivors of the initial marking phase.

2.067: [GC concurrent-root-region-scan-end, 0.0251507]

End of the concurrent root region scan phase; it lasted for 0.0251507 seconds.

2.068: [GC concurrent-mark-start]

Start of the concurrent marking at 2.068 secs from the start of the process.

3.198: [GC concurrent-mark-reset-for-overflow]

This indicates that the global marking stack had become full and there was an overflow of the stack. Concurrent marking detected this overflow and had to reset the data structures to start the marking again.

4.053: [GC concurrent-mark-end, 1.9849672 sec]

End of the concurrent marking phase; it lasted for 1.9849672 seconds.

4.055: [GC remark 4.055: [GC ref-proc, 0.0000254 secs], 0.0030184 secs]

This corresponds to the remark phase, which is a stop-the-world phase. It completes the left-over marking work (SATB buffers processing) from the previous phase. In this case, this phase took 0.0030184 secs, out of which 0.0000254 secs were spent on Reference processing.

4.088: [GC cleanup 117M->106M(138M), 0.0015198 secs]

Cleanup phase, which is again a stop-the-world phase. It goes through the marking information of all the regions, computes the live data information of each region, resets the marking data structures and sorts the regions according to their gc-efficiency. In this example, the total heap size is 138M, and after the live data counting it was found that the total live data size dropped down from 117M to 106M.
4.090: [GC concurrent-cleanup-start]

This concurrent cleanup phase frees up the regions that were found to be empty (didn't contain any live data) during the previous stop-the-world phase.

4.091: [GC concurrent-cleanup-end, 0.0002721]

The concurrent cleanup phase took 0.0002721 secs to free up the empty regions.

Option -XX:+G1PrintRegionLivenessInfo

Now, let's look at the output generated with the flag G1PrintRegionLivenessInfo. This is a diagnostic option and gets enabled with -XX:+UnlockDiagnosticVMOptions. G1PrintRegionLivenessInfo prints the live data information of each region during the Cleanup phase of the concurrent-marking cycle.

26.896: [GC cleanup
### PHASE Post-Marking @ 26.896
### HEAP committed: 0x02e00000-0x0fe00000 reserved: 0x02e00000-0x12e00000 region-size: 1048576

The Cleanup phase of the concurrent-marking cycle started at 26.896 secs from the start of the process, and this live data information is being printed after the marking phase. The committed G1 heap ranges from 0x02e00000 to 0x0fe00000 and the total G1 heap reserved by the JVM is from 0x02e00000 to 0x12e00000. Each region in the G1 heap is of size 1048576 bytes.

### type address-range used prev-live next-live gc-eff
### (bytes) (bytes) (bytes) (bytes/ms)

This is the header of the output that tells us about the type of the region, the address-range of the region, the used space in the region, the live bytes in the region with respect to the previous marking cycle, the live bytes in the region with respect to the current marking cycle, and the GC efficiency of that region.

### FREE 0x02e00000-0x02f00000 0 0 0 0.0

This is a Free region.

### OLD 0x02f00000-0x03000000 1048576 1038592 1038592 0.0

An Old region with address-range from 0x02f00000 to 0x03000000. The total used space in the region is 1048576 bytes, the live bytes as per the previous marking cycle are 1038592, and the live bytes with respect to the current marking cycle are also 1038592. The GC efficiency has been computed as 0.

### EDEN 0x03400000-0x03500000 20992 20992 20992 0.0

This is an Eden region.

### HUMS 0x0ae00000-0x0af00000 1048576 1048576 1048576 0.0
### HUMC 0x0af00000-0x0b000000 1048576 1048576 1048576 0.0
### HUMC 0x0b000000-0x0b100000 1048576 1048576 1048576 0.0
### HUMC 0x0b100000-0x0b200000 1048576 1048576 1048576 0.0
### HUMC 0x0b200000-0x0b300000 1048576 1048576 1048576 0.0
### HUMC 0x0b300000-0x0b400000 1048576 1048576 1048576 0.0
### HUMC 0x0b400000-0x0b500000 1001480 1001480 1001480 0.0

These are a continuous set of regions, called Humongous regions, used for storing a large object. HUMS (Humongous starts) marks the start of the set of humongous regions and HUMC (Humongous continues) tags the subsequent regions of the humongous region set.

### SURV 0x09300000-0x09400000 16384 16384 16384 0.0

This is a Survivor region.

### SUMMARY capacity: 208.00 MB used: 150.16 MB / 72.19 % prev-live: 149.78 MB / 72.01 % next-live: 142.82 MB / 68.66 %

At the end, a summary is printed listing the capacity, the used space, and the change in the liveness after the completion of concurrent marking. In this case, the G1 heap capacity is 208MB, the total used space is 150.16MB, which is 72.19% of the total heap size, the live data in the previous marking was 149.78MB, which was 72.01% of the total heap size, and the live data as per the current marking is 142.82MB, which is 68.66% of the total heap size.

Option -XX:+G1PrintHeapRegions

The G1PrintHeapRegions option logs region-related events when regions are committed, allocated into, or reclaimed.
COMMIT/UNCOMMIT events

G1HR COMMIT [0x6e900000,0x6ea00000]
G1HR COMMIT [0x6ea00000,0x6eb00000]

Here, the heap is being initialized or expanded and the region (with bottom: 0x6eb00000 and end: 0x6ec00000) is being freshly committed. COMMIT events are always generated in order, i.e. the next COMMIT event will always be for the uncommitted region with the lowest address.

G1HR UNCOMMIT [0x72700000,0x72800000]
G1HR UNCOMMIT [0x72600000,0x72700000]

The opposite of COMMIT. The heap was shrunk at the end of a Full GC and the regions are being uncommitted. Like COMMIT, UNCOMMIT events are also generated in order, i.e. the next UNCOMMIT event will always be for the committed region with the highest address.

GC Cycle events

G1HR #StartGC 7
G1HR CSET 0x6e900000
G1HR REUSE 0x70500000
G1HR ALLOC(Old) 0x6f800000
G1HR RETIRE 0x6f800000 0x6f821b20
G1HR #EndGC 7

This shows the start and end of an Evacuation Pause. These events are followed by a GC counter tracking both evacuation pauses and Full GCs. Here, this is the 7th GC since the start of the process.

G1HR #StartFullGC 17
G1HR UNCOMMIT [0x6ed00000,0x6ee00000]
G1HR POST-COMPACTION(Old) 0x6e800000 0x6e854f58
G1HR #EndFullGC 17

Shows the start and end of a Full GC. This event is also followed by the same GC counter as above. This is the 17th GC since the start of the process.

ALLOC events

G1HR ALLOC(Eden) 0x6e800000

The region with bottom 0x6e800000 just started being used for allocation. In this case it is an Eden region allocated into by a mutator thread.

G1HR ALLOC(StartsH) 0x6ec00000 0x6ed00000
G1HR ALLOC(ContinuesH) 0x6ed00000 0x6e000000

Regions being used for the allocation of a Humongous object. The object spans over two regions.

G1HR ALLOC(SingleH) 0x6f900000 0x6f9eb010

A single region being used for the allocation of a Humongous object.

G1HR COMMIT [0x6ee00000,0x6ef00000]
G1HR COMMIT [0x6ef00000,0x6f000000]
G1HR COMMIT [0x6f000000,0x6f100000]
G1HR COMMIT [0x6f100000,0x6f200000]
G1HR ALLOC(StartsH) 0x6ee00000 0x6ef00000
G1HR ALLOC(ContinuesH) 0x6ef00000 0x6f000000
G1HR ALLOC(ContinuesH) 0x6f000000 0x6f100000
G1HR ALLOC(ContinuesH) 0x6f100000 0x6f102010

Here, a Humongous object allocation request could not be satisfied by the free committed regions that existed in the heap, so the heap needed to be expanded. Thus new regions are committed and then allocated into for the Humongous object.

G1HR ALLOC(Old) 0x6f800000

An Old region started being used for allocation during GC.

G1HR ALLOC(Survivor) 0x6fa00000

A region being used for copying old objects into during a GC. Note that Eden and Humongous ALLOC events are generated outside the GC boundaries, and Old and Survivor ALLOC events are generated inside the GC boundaries.

Other Events

G1HR RETIRE 0x6e800000 0x6e87bd98

Retire and stop using the region having bottom 0x6e800000 and top 0x6e87bd98 for allocation. Note that most regions are full when they are retired, and we omit those events to reduce the output volume. A region is retired when another region of the same type is allocated or when we reach the start or end of a GC (depending on the region). So for Eden regions, for example:

1. ALLOC(Eden) Foo
2. ALLOC(Eden) Bar
3. StartGC

At point 2, Foo has just been retired and it was full. At point 3, Bar was retired and it was full. If they were not full when they were retired, we will have a RETIRE event:

1. ALLOC(Eden) Foo
2. RETIRE Foo top
3. ALLOC(Eden) Bar
4. StartGC

G1HR CSET 0x6e900000

The region (bottom: 0x6e900000) is selected for the Collection Set. The region might have been selected for the collection set earlier (i.e. when it was allocated).
However, we generate the CSET events for all regions in the CSet at the start of a GC to make sure there's no confusion about which regions are part of the CSet.

G1HR POST-COMPACTION(Old) 0x6e800000 0x6e839858

The POST-COMPACTION event is generated for each non-empty region in the heap after a full compaction. A full compaction moves objects around, so we don't know what the resulting shape of the heap is (which regions were written to, which were emptied, etc.). To deal with this, we generate a POST-COMPACTION event for each non-empty region with its type (old/humongous) and the heap boundaries. At this point we should only have Old and Humongous regions, as we have collapsed the young generation, so we should not have Eden and Survivors. POST-COMPACTION events are generated within the Full GC boundary.

G1HR CLEANUP 0x6f400000
G1HR CLEANUP 0x6f300000
G1HR CLEANUP 0x6f200000

These regions were found empty after the remark phase of Concurrent Marking and are reclaimed shortly afterwards.

G1HR #StartGC 5
G1HR CSET 0x6f400000
G1HR CSET 0x6e900000
G1HR REUSE 0x6f800000

At the end of a GC we retire the old region we are allocating into. Given that it's not full, we will carry on allocating into it during the next GC. This is what REUSE means. In the above case, 0x6f800000 should have been the last region with an ALLOC(Old) event during the previous GC and should have been retired before the end of the previous GC.

G1HR ALLOC-FORCE(Eden) 0x6f800000

A specialization of ALLOC which indicates that we have reached the max desired number of the particular region type (in this case: Eden), but we decided to allocate one more. Currently it's only used for Eden regions when we extend the young generation because we cannot do a GC as the GC-Locker is active.

G1HR EVAC-FAILURE 0x6f800000

During a GC, we have failed to evacuate an object from the given region as the heap is full and there is no space left to copy the object. This event is generated within GC boundaries and exactly once for each region from which we failed to evacuate objects.

When are Heap Regions reclaimed?

It is also worth mentioning when the heap regions in the G1 heap are reclaimed. All regions that are in the CSet (the ones that appear in CSET events) are reclaimed at the end of a GC. The exception to that are regions with EVAC-FAILURE events. All regions with CLEANUP events are reclaimed. After a Full GC some regions get reclaimed (the ones from which we moved the objects out), but that is not shown explicitly; instead, the non-empty regions that are left in the heap are printed out with the POST-COMPACTION events.
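As a footnote, all of the output discussed in this post can be produced with a command line along the following lines. This is only a sketch – MyApp stands in for your own main class, and -XX:+UnlockDiagnosticVMOptions has to appear before the two diagnostic flags it unlocks (-XX:+UseG1GC simply selects the G1 collector):

java -XX:+UseG1GC -XX:+PrintGCDetails -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo -XX:+G1PrintHeapRegions MyApp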

    Read the article

  • SQL Saturday 43 in Redmond

    - by AjarnMark
I attended my first SQLSaturday a couple of days ago, SQLSaturday #43 in Redmond (at Microsoft).  I got there really early, primarily because I forgot how fast I can get there from my home when nobody else is on the road.  On a weekday in rush hour traffic, it would have taken two hours to get there.  I gave myself 90 minutes, and actually got there in about 45.  Crazy! I made the mistake of going to the main Microsoft campus, but that’s not where the event was being held.  Instead it was in a big Microsoft conference center on the other side of the highway.  Fortunately, I had the address with me and quickly realized my mistake.  When I got back on track, I noticed that there were bright yellow signs out on the street corner that looked like they said they were for SOL Saturday, which actually was appropriate since it was the sunniest day around here in a long time. Since I was there so early, the registration was just getting set up, so I found Greg Larsen, who was coordinating things, and offered to help.  He put me to work with a group of people organizing the pre-printed raffle tickets and stuffing swag bags. I had never been to a SQLSaturday before this one, so I wasn’t exactly sure what to expect, even though I have read about a few on some blogs.  It makes sense that each one will be a little bit different since they are almost completely volunteer driven, and the whole concept is still in its early stages.  I have been to the PASS Summit for the last several years, and was hoping for a smaller version of that.  Now, it’s not really fair to compare one free day of training run entirely by volunteers with a multi-day, $1,000+ event put on under the direction of a professional event management company.  But there are some parallels. At this SQLSaturday, there was no opening general session, just coffee and pastries in the common area / expo hallway and straight into the first group of sessions.  I don’t know if that was because there was no single room large enough to hold everyone, or for other reasons.  This worked out okay, but the organization guy in me would have preferred to have even a 15 minute welcome message from the organizers with a little overview of the day.  Even something as simple as, “Thanks to persons X, Y, and Z for helping put this together…Sessions will start in 20 minutes and are all in rooms down this hallway…the bathrooms are on the other side of the conference center…lunch today is pizza and we would like to thank sponsor Q for providing it.”  It doesn’t need to be much, certainly not a full-blown Keynote like at the PASS Summit, but something to use as a rallying point to pull everyone together and get the day off to an official start would be nice.  Again, there may have been logistical reasons why that was not feasible here.  I’m just putting out my thoughts for other SQLSaturday coordinators to consider. The event overall was great.  I believe that there were over 300 in attendance, and everything seemed to run smoothly.  At least from an attendee’s point of view, where there were plenty of muffins in the morning and pizza in the afternoon, with plenty of pop to drink.  And hey, if you’ve got the food and drink covered, a lot of other stuff could go wrong and people will be very forgiving.  But as I said, everything appeared to run pretty smoothly, at least until Buck Woody showed up in his Oracle shirt.  Other than that, the volunteers did a great job! I was a little surprised by how few people in my own backyard I know.
It makes sense if you really think about it, given how many companies must be using SQL Server around here.  I guess I just got spoiled coming into the PASS Summit with a few contacts that I already knew would be there.  Perhaps I have been spending too much time with too few people at the Summits and I need to step out and meet more folks.  Of course, it also is different since the Summit is the big national event and a number of the folks I know are spread out across the country, so the Summit is the only time we’re all in the same place at the same time.  I did make a few new contacts at SQLSaturday, and bumped into a couple of people that I knew (and a couple of others that I only knew from Twitter, and didn’t even realize were here in the area). Other than the sheer entertainment value of Buck Woody’s session, the one that was probably of the greatest value for me was a quick introduction to PowerShell.  I have not done anything with it yet, but I think it will be a good tool to use to implement my plans for automated database recovery testing.  I saw just enough at the session to take away some of the intimidation factor, and I am getting ready to jump in and see what I can put together in the next few weeks.  And that right there made the investment worthwhile.  So I encourage you, if you have the opportunity to go to a SQLSaturday event near you, go for it!

    Read the article

  • Windows Phone Resources from //BUILD 2013 Conference by Lee Stott

    - by Nikita Polyakov
Originally posted on: http://geekswithblogs.net/campuskoder/archive/2013/07/02/153320.aspx
Lee Stott has a great summary blog post with all of the videos from the //BUILD 2013 conference that took place last week. It's handy because filtering by this event and finding Windows Phone sessions on Channel9 is not easy, and his post is a great snapshot of all of the sessions from the conference on a single page. It also shows that Microsoft, although focused on Windows 8.1 at this event, still had a sizable presence of Windows Phone developer topics. Read the full blog post here: http://blogs.msdn.com/b/uk_faculty_connection/archive/2013/07/01/build-2013-windows-phone-resources.aspx

    Read the article

< Previous Page | 340 341 342 343 344 345 346 347 348 349 350 351  | Next Page >