Search Results


  • JavaScript and the Twitter API rate limit (changing a variable's value in a loop)

    - by Pablo
    Hello, I have adapted a script from an example at http://github.com/remy/twitterlib. It is a script that queries my Twitter timeline every 10 seconds and keeps only the messages that begin with a musical notation. It already works, but I don't know whether this is the best way to do it... The Twitter API has a rate limit of 150 requests per hour per IP (queries from the same user). Right now my access gets blocked after 25 minutes because of the 10-second interval between queries. If I use a 25-second interval I stay below the hourly limit, but then the first 10 posts appear very slowly. I think I can stay below the Twitter API rate limit and still show the first 10 posts at normal speed this way: for the first 10 posts, query every 5 seconds; for the rest of the posts, query every 25 seconds. If somewhere in the code I add a loop implementing that rule, raising the "frecuency" value (as the variable is spelled in the code) from 5000 to 25000 after the 10th query (or after 50 seconds, which is the same), that should be it... Can you help me modify the code below to make it work? Thank you in advance.

        var Queue = function (delay, callback) {
            var q = [], timer = null, processed = {}, empty = null,
                ignoreRT = twitterlib.filter.format('-"RT @"');

            function process() {
                var item = null;
                if (q.length) {
                    callback(q.shift());
                } else {
                    this.stop();
                    setTimeout(empty, 5000);
                }
                return this;
            }

            return {
                push: function (item) {
                    var green = [], i;
                    if (!(item instanceof Array)) { item = [item]; }
                    if (timer == null && q.length == 0) { this.start(); }
                    for (i = 0; i < item.length; i++) {
                        if (!processed[item[i].id] && twitterlib.filter.match(item[i], ignoreRT)) {
                            processed[item[i].id] = true;
                            q.push(item[i]);
                        }
                    }
                    q = q.sort(function (a, b) { return a.id > b.id; });
                    return this;
                },
                start: function () {
                    if (timer == null) { timer = setInterval(process, delay); }
                    return this;
                },
                stop: function () {
                    clearInterval(timer);
                    timer = null;
                    return this;
                },
                empty: function (fn) { empty = fn; return this; },
                q: q,
                next: process
            };
        };

        $.extend($.expr[':'], {
            below: function (a, i, m) {
                var y = m[3];
                return $(a).offset().top > y;
            }
        });

        function renderTweet(data) {
            var html = '';
            html += '';
            html += twitterlib.ify.clean(data.text);
            html += '';
            since_id = data.id;
            return html;
        }

        function passToQueue(data) {
            if (data.length) {
                twitterQueue.push(data.reverse());
            }
        }

        var frecuency = 10000; // the lapse between each new Queue run
        var since_id = 1;

        var run = function () {
            twitterlib.timeline('twitteruser', { filter: "'?'", limit: 10 }, passToQueue);
        };

        var twitterQueue = new Queue(frecuency, function (item) {
            var tweet = $(renderTweet(item));
            var tweetClone = tweet.clone().hide().css({ visibility: 'hidden' })
                .prependTo('#tweets').slideDown(1000);
            tweet.css({ top: -200, position: 'absolute' }).prependTo('#tweets')
                .animate({ top: 0 }, 1000, function () {
                    tweetClone.css({ visibility: 'visible' });
                    $(this).remove();
                });
            $('#tweets p:below(' + window.innerHeight + ')').remove();
        }).empty(run);

        run();
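    A minimal sketch of one way to do the switch, assuming the Queue closure above stays as-is: expose a setDelay method that restarts the interval, and call it from the rendering callback once ten tweets have been shown. The method name and the counter are illustrative, not part of twitterlib.

        // Added inside the object literal returned by Queue (sketch):
        setDelay: function (ms) {
            delay = ms;          // 'delay' is the closure variable used by start()
            this.stop();         // clear the current interval...
            return this.start(); // ...and restart it at the new pace
        },

        // At the call site: start fast, slow down after the tenth tweet.
        var shown = 0;
        var twitterQueue = new Queue(5000, function (item) {
            // ...render the tweet exactly as before...
            if (++shown === 10) {
                twitterQueue.setDelay(25000); // drop to one dequeue per 25 s
            }
        }).empty(run);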


  • Logcat error: "addView(View, LayoutParams) is not supported in AdapterView" in a ListView

    - by HacKreatorz
    I'm writing an application for Android, and one thing I need is for it to show a list of all files and directories on the SD card and be able to move through the different directories. I found a good tutorial on anddev: http://bit.ly/h4GyFC I modified a few things so the application browses the SD card instead of the Android root directories, but the rest is mostly the same. This is my XML file for the activity:

        <?xml version="1.0" encoding="utf-8"?>
        <ListView xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@id/android:list"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">
            <TextView
                android:layout_width="fill_parent"
                android:layout_height="wrap_content" />
        </ListView>

    And this is the code for the activity:

        import hackreatorz.cifrador.R;
        import java.io.File;
        import java.util.ArrayList;
        import android.app.ListActivity;
        import android.content.res.Configuration;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.ArrayAdapter;
        import android.widget.ListView;
        import android.widget.Toast;

        public class ArchivosSD extends ListActivity {
            private ArrayList<String> directoryEntries = new ArrayList<String>();
            private File currentDirectory = new File("/sdcard/");

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                browseToSD();
            }

            @Override
            public void onConfigurationChanged(Configuration newConfig) {
                super.onConfigurationChanged(newConfig);
            }

            private void browseToSD() {
                browseTo(new File("/sdcard/"));
            }

            private void upOneLevel() {
                if (this.currentDirectory.getParent() != null)
                    this.browseTo(this.currentDirectory.getParentFile());
            }

            private void browseTo(final File directory) {
                if (directory.isDirectory()) {
                    this.currentDirectory = directory;
                    fill(directory.listFiles());
                } else {
                    Toast.makeText(ArchivosSD.this,
                            this.directoryEntries.get(this.getSelectedItemPosition()),
                            Toast.LENGTH_SHORT).show();
                }
            }

            private void fill(File[] files) {
                this.directoryEntries.clear();
                this.directoryEntries.add(getString(R.string.current_dir));
                if (this.currentDirectory.getParent() != null)
                    this.directoryEntries.add(getString(R.string.up_one_level));
                int currentPathStringLength = (int) this.currentDirectory.getAbsoluteFile().length();
                for (File file : files) {
                    this.directoryEntries.add(file.getAbsolutePath().substring(currentPathStringLength));
                }
                setListAdapter(new ArrayAdapter<String>(this, R.layout.archivos_sd, this.directoryEntries));
            }

            @Override
            protected void onListItemClick(ListView l, View v, int position, long id) {
                int selectionRowID = (int) this.getSelectedItemPosition();
                String selectedFileString = this.directoryEntries.get(selectionRowID);
                if (selectedFileString.equals(".")) {
                    this.browseToSD();
                } else if (selectedFileString.equals("..")) {
                    this.upOneLevel();
                } else {
                    File clickedFile = null;
                    clickedFile = new File(this.currentDirectory.getAbsolutePath()
                            + this.directoryEntries.get(selectionRowID));
                    if (clickedFile != null)
                        this.browseTo(clickedFile);
                }
            }
        }

    I don't get any errors in Eclipse, but the application force-closes when I run it on my phone, and in Logcat I see the following:

        01-01 23:30:29.858: ERROR/AndroidRuntime(14911): FATAL EXCEPTION: main
        01-01 23:30:29.858: ERROR/AndroidRuntime(14911): java.lang.UnsupportedOperationException: addView(View, LayoutParams) is not supported in AdapterView

    I don't have a clue what to do; I've searched Google and Stack Overflow and didn't find anything. This is my first application in Java and for Android, so I'm a real noob, and if the answer was out there, I didn't understand it. I would really appreciate it if you could explain what I should do to fix this error, and why. Thanks for everything in advance.
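    A likely cause, offered as a sketch rather than a confirmed fix: ArrayAdapter inflates the layout resource it is given once per row, and R.layout.archivos_sd's root is a ListView with a TextView child. Inflating children into an AdapterView is exactly what throws this UnsupportedOperationException. Splitting the row into its own layout avoids it (the file name below is an assumption):

        <!-- res/layout/file_row.xml (hypothetical file): a lone TextView as the row layout -->
        <?xml version="1.0" encoding="utf-8"?>
        <TextView xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content" />

    and then in fill():

        setListAdapter(new ArrayAdapter<String>(this, R.layout.file_row, this.directoryEntries));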


  • No route matches [GET] "/user/sign_out"

    - by user3399101
    So, I'm getting the error below when clicking Sign Out in the drop-down menu in the nav:

        No route matches [GET] "/user/sign_out"

    However, this only happens when using the sign-out link in the drop-down nav (the hamburger menu for mobile devices), not when clicking the sign-out link in the regular nav. See the code below:

        <div class="container demo-5">
          <div class="main clearfix">
            <div class="column">
              <div id="dl-menu" class="dl-menuwrapper">
                <button class="dl-trigger">Open Menu</button>
                <ul class="dl-menu dl-menu-toggle">
                  <div id="closebtn" onclick="closebtn()"></div>
                  <% if user_signed_in? %>
                    <li><%= link_to 'FAQ', faq_path %></li>
                    <li><a href="#">Contact Us</a></li>
                    <li><%= link_to 'My Account', account_path %></li>
                    <li><%= link_to 'Sign Out', destroy_user_session_path, method: 'delete' %></li> <%# this is the line %>
                  <% else %>
                    <li><%= link_to 'FAQ', faq_path %></li>
                    <li><a href="#">Contact Us</a></li>
                    <li><%= link_to 'Sign In', new_user_session_path %></li>
                    <li><%= link_to 'Free Trial', plans_path %></li>
                  <% end %>
                </ul>
              </div><!-- /dl-menuwrapper -->
            </div>
          </div>
        </div><!-- /container -->
        </div>

    And this is the non-drop-down code that works:

        <div class="signincontainer pull-right">
          <div class="navbar-form navbar-right">
            <% if user_signed_in? %>
              <%= link_to 'Sign out', destroy_user_session_path, class: 'btn signin-button', method: :delete %>
              <div class="btn signin-button usernamefont"><%= link_to current_user.full_name, account_path %></div>
            <% else %>
              ...rest of code here

    Updated error:

        ActionController::RoutingError (No route matches [GET] "/user/sign_out"):
          actionpack (4.0.4) lib/action_dispatch/middleware/debug_exceptions.rb:21:in `call'
          actionpack (4.0.4) lib/action_dispatch/middleware/show_exceptions.rb:30:in `call'
          railties (4.0.4) lib/rails/rack/logger.rb:38:in `call_app'
          railties (4.0.4) lib/rails/rack/logger.rb:20:in `block in call'
          activesupport (4.0.4) lib/active_support/tagged_logging.rb:68:in `block in tagged'
          activesupport (4.0.4) lib/active_support/tagged_logging.rb:26:in `tagged'
          activesupport (4.0.4) lib/active_support/tagged_logging.rb:68:in `tagged'
          railties (4.0.4) lib/rails/rack/logger.rb:20:in `call'
          quiet_assets (1.0.2) lib/quiet_assets.rb:18:in `call_with_quiet_assets'
          actionpack (4.0.4) lib/action_dispatch/middleware/request_id.rb:21:in `call'
          rack (1.5.2) lib/rack/methodoverride.rb:21:in `call'
          rack (1.5.2) lib/rack/ru
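    Two common workarounds, sketched as assumptions about this setup rather than confirmed fixes: a method: :delete link only becomes a DELETE request when jquery_ujs is loaded and the generated data-method attribute survives whatever the mobile menu plugin does to its items; if either fails, the click degrades to a plain GET, which is exactly the routing error shown. Either make sure the UJS driver is present, or tell Devise to accept GET for sign-out:

        // app/assets/javascripts/application.js -- needed for method: :delete links
        //= require jquery
        //= require jquery_ujs

        # config/initializers/devise.rb -- alternative: allow GET sign-out
        config.sign_out_via = :get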


  • A friendly elaboration on how Iceweasel misses the mark, as requested in a recent post

    - by v3x3n
    I would like to spell out what you asked about, the ways Iceweasel falls short, in the hope that you will consider the answer valid: the default package should run efficiently for a novice user, especially when it gives a first impression of new software, a browser, or a package they receive by default. First let me say that I am very grateful for all Debian has done and continues to do for its user base, and I am not complaining, only hoping that mentioning this will eventually help improve the disappointing experience Iceweasel gave me in multiple ways right from the start. I am sure a tremendous amount of hard work went into making it work as well as it does, and tremendous effort continues to go into it, like the rest of the package and distros such as this...

    Having said that, I aim to keep my examples as simple as possible while explaining them in enough detail not to leave out information needed to get a full idea of my issue. (Thanks for understanding if I sound like a novice to Debian; I am, but I continue to learn and advance by trial and error each day, and of course with the wonderful help of the great minds available here and there.)

    Well, my bone to pick (and I disdain being a complainer when so much has been done for us, the end user) with Iceweasel is mainly three problems I noticed immediately, during completely normal browsing conduct for an average user, as soon as I started a task with it. First: I headed directly to Google Images, searched for a few web-sized images, then saved a few (no more than 800 to 900 pixels in dimensions, a typical procedure I would imagine). I experienced a horrendous, recurring freeze and lag that more than once nearly forced me to kill the process, and it compounded so that a few attempts later even a single image save locked up the entire browser. I ultimately gave up on this simple procedure with a bit of aggravation, wondering whether I had done something wrong beforehand or during installation.

    Secondly, visiting video player sites (in this case YouTube) will not play any video feed whatsoever, and again I experienced a slight freeze that eventually dissipated as long as I didn't press play or load another video. Very annoying, and possibly related to my first issue (which could rest on my own lack of knowledge in setting up the configuration, but that troubles me if so).

    Lastly, as another user has already stated (which is what brought you to ask them what exactly the problem was in the first place), and this is the final straw that broke the weasel's back and led me here to look for a way to an updated Firefox: Iceweasel (an apparent build of Firefox 17.x.x) leaves users out of luck when installing their favorite or most useful add-ons and plugins. As the other user clearly stated, there are almost zero options and barely any luck finding compatible plugins for such an outdated version. I can only imagine, if as a novice or standard Linux user and a beginner to Debian (which I am all for, by the way) I am discovering major usability issues that hinder normal use right off the bat, that a bundle or more of security vulnerabilities would unravel if I gave Mr Hommey's package any further attention or continued interest in using it...

    This is a deal breaker on any account, unless all three of these issues are due to a very noobish setting or the lack of one. If that is the case, it was not properly explained for a beginner anywhere I could find while following installation tutorials such as the LVM installers, which I felt lacked a clear explanation for someone trying to learn why and how, rather than a fix for a potentially broken out-of-box installation. I really think that is irrelevant in any case, however, as there is plenty of step-by-step support for that kind of thing; just saying it is very challenging alongside a job with deadlines and the need to learn a better, more customizable system, one that was not as popular when I became a dev via Windows (OK, OK, I know that's a huge mistake, and now I am way off the original topic, but anyway).

    Thanks very much for taking the time to read this, and I hope it honestly gives you a few good and valid reasons why users want and seek a way around Iceweasel altogether. Finally, if I am missing something vital that renders all I've stated invalid or useless, please feel free to shove that elephant in the room my direction! Thanks for all you do, as I am sure it is very useful and dedicated work, being that you are a Debian maintainer and super user! Much love for intelligence, freedom, and sharing the secrets of the web! May it remain unregulated and a good place to grow and learn for generations to come! Stay true! Take care now! -v3x (v3x @ gmx .com)


  • "assignment makes integer from pointer without a cast " warning in c

    - by mekasperasky
        #include <stdio.h>

        /* This is a lexer which recognizes constants, variables, symbols,
           identifiers, functions, comments and also header files. It stores
           the lexemes in 3 different files. One file contains all the headers
           and the comments. Another file will contain all the variables,
           another will contain all the symbols. */

        int main()
        {
            int i = 0, j;
            char a, b[20], c[30];
            FILE *fp1, *fp2;

            c[0] = "if";      c[1] = "then";      c[2] = "else";      c[3] = "switch";
            c[4] = "printf";  c[5] = "scanf";     c[6] = "NULL";      c[7] = "int";
            c[8] = "char";    c[9] = "float";     c[10] = "long";     c[11] = "double";
            c[12] = "char";   c[13] = "const";    c[14] = "continue"; c[15] = "break";
            c[16] = "for";    c[17] = "size of";  c[18] = "register"; c[19] = "short";
            c[20] = "auto";   c[21] = "while";    c[22] = "do";       c[23] = "case";

            fp1 = fopen("source.txt", "r"); /* the source file is opened in read-only mode and passed through the lexer */
            fp2 = fopen("lext.txt", "w");   /* now let's remove all the white spaces and store the rest of the words in a file */
            if (fp1 == NULL) {
                perror("failed to open source.txt");
                /* return EXIT_FAILURE; */
            }
            i = 0;
            while (!feof(fp1)) {
                a = fgetc(fp1);
                if (a != ' ') {
                    b[i] = a;
                } else {
                    for (j = 0; j < 23; j++) {
                        if (c[j] == b) {
                            fprintf(fp2, "%.20s\n", c[j]);
                            continue;
                        }
                        b[i] = '\0';
                        fprintf(fp2, "%.20s\n", b);
                        i = 0;
                        continue;
                    }
                    /* else if */
                    /* { */
                    i = i + 1;
                    /* Switch(a) {
                       case EOF : return eof;
                       case '+': sym = sym + 1;
                       case '-': sym = sym + 1;
                       case '*': sym = sym + 1;
                       case '/': sym = sym + 1;
                       case '%': sym = sym + 1;
                       case ' */
                }
            }
            fclose(fp1);
            fclose(fp2);
            return 0;
        }

    This is my C code for lexical analysis. It gives warnings and also writes nothing into the lext file.
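    The warning itself has a concrete cause worth sketching (offered as the usual diagnosis, not a verified fix for this exact build): c is declared as char c[30], an array of single characters, yet string literals, which decay to char * pointers, are assigned to its elements, so each assignment squeezes a pointer into a char. The comparison c[j] == b then compares an address, not text. A minimal corrected fragment, assuming the keyword list stays as it is:

        #include <string.h>     /* for strcmp */

        const char *c[24];      /* array of 24 string pointers, not 30 chars */
        c[0] = "if";
        c[1] = "then";
        /* ... and so on for the other keywords ... */

        /* compare contents, not pointer values: */
        if (strcmp(c[j], b) == 0) {
            fprintf(fp2, "%.20s\n", c[j]);
        }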


  • Is this an SEO-safe anchor link?

    - by Mayhem
    So... is this a safe way to use internal links on your site? Doing it this way, the index page generates the usual PHP content section and hands it to the div element. THE MAIN QUESTION: will Google still index the pages using this method? Common sense tells me it does, but I'm just double-checking, and leaving this here as a base example as well if it is. EXAMPLE ONLY, PEOPLE.

    The server side:

        if (isset($_REQUEST['page'])) { $pageID = $_REQUEST['page']; }
        else { $pageID = "home"; }
        if (isset($_REQUEST['pageMode']) && $_REQUEST['pageMode'] == "js") {
            require "content/" . $pageID . ".php";
            exit;
        }
        // ELSE - REST OF WEBSITE WILL BE GENERATED USING THE page VARIABLE

    The links:

        <a class='btnMenu' href='?page=home'>Home Page</a>
        <a class='btnMenu' href='?page=about'>About</a>
        <a class='btnMenu' href='?page=Services'>Services</a>
        <a class='btnMenu' href='?page=contact'>Contact</a>

    The JavaScript:

        $(function () {
            $(".btnMenu").click(function () { return doNav(this); });
        });
        function doNav(objCaller) {
            var sPage = $(objCaller).attr("href").substring(6, 255);
            $.get("index.php", { page: sPage, pageMode: 'js' }, function (data) {
                ("#siteContent").html(data).scrollTop(0);
            });
            return false;
        }

    Forgive me if there are any errors; I just copied and pasted this from my script and removed a bunch of junk to simplify it, as I'm still prototyping/whiteboarding the project it's in. So yes, it does look a little nasty at the moment. REASONS WHY: the main reasons are bandwidth and speed. This will allow other scripts to run and control the site/application a little better, and yes, it will need to be locked down with some coding.

    -- FURTHER EXAMPLE -- (insert the PHP at the top)

        <?php // PHP CODE HERE ?>
        <html>
        <head>
            <link rel="stylesheet" type="text/css" href="style.css" />
            <script type="text/javascript" src="jquery.js"></script>
            <script type="text/javascript" src="scripts.js"></script>
        </head>
        <body>
        <div class='siteBody'>
            <div class='siteHeader'>
                <?php
                foreach ($pageList as $key => $value) {
                    if ($pageID == $key) { $btnClass = "btnMenuSel"; }
                    else { $btnClass = "btnMenu"; }
                    echo "<a class='$btnClass' href='?page=" . $key . "'>" . $pageList[$key] . "</a>";
                }
                ?>
            </div><div id="siteContent" style='margin-top:10px;'>
                <?php require "content/" . $pageID . ".php"; ?>
            </div><div class='siteFooter'>
            </div>
        </div>
        </body>
        </html>
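    Since the post itself notes the routing "will need to be locked down", here is a minimal sketch of that lock-down, assuming the page names shown in the links: building a require path straight from $_REQUEST invites directory traversal, so whitelist the parameter first. Crawlers still see plain ?page= hrefs, so indexing is unaffected by the JavaScript layer.

        // Sketch: whitelist the page parameter before using it in a path.
        $allowed = array('home', 'about', 'Services', 'contact');
        $pageID = (isset($_REQUEST['page']) && in_array($_REQUEST['page'], $allowed, true))
                ? $_REQUEST['page']
                : 'home';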


  • Someone tried to hack my Node.js server, need to understand a GET request in the logs

    - by Akay
    Alright, so I left my Node.js server alone for a while and came back to find some really interesting stuff in the logs. Apparently someone from China or Poland tried to hack my server using directory traversal and the like; while it seems he did not succeed, I am unable to understand a few entries in the log. This is the output of a "nohup.out" file. The attack starts; apparently he is trying to find some console entry point on my server. All of these fail and return a 404.

        [90mGET /../../../../../../../../../../../ [31m500 [90m6ms - 2b[0m
        [90mGET /<script>alert(53416)</script> [33m404 [90m7ms[0m
        [90mGET / [32m200 [90m2ms - 240b[0m
        [90mGET / [32m200 [90m1ms - 240b[0m
        [90mGET / [32m200 [90m2ms - 240b[0m
        [90mGET /pz3yvy3lyzgja41w2sp [33m404 [90m1ms[0m
        [90mGET /stylesheets/style.css [33m404 [90m0ms[0m
        [90mGET /index.html [33m404 [90m1ms[0m
        [90mGET /index.htm [33m404 [90m0ms[0m
        [90mGET /default.html [33m404 [90m0ms[0m
        [90mGET /default.htm [33m404 [90m1ms[0m
        [90mGET /default.asp [33m404 [90m1ms[0m
        [90mGET /index.php [33m404 [90m0ms[0m
        [90mGET /default.php [33m404 [90m1ms[0m
        [90mGET /index.asp [33m404 [90m0ms[0m
        [90mGET /index.cgi [33m404 [90m0ms[0m
        [90mGET /index.jsp [33m404 [90m1ms[0m
        [90mGET /index.php3 [33m404 [90m0ms[0m
        [90mGET /index.pl [33m404 [90m0ms[0m
        [90mGET /default.jsp [33m404 [90m0ms[0m
        [90mGET /default.php3 [33m404 [90m0ms[0m
        [90mGET /index.html.en [33m404 [90m0ms[0m
        [90mGET /web.gif [33m404 [90m34ms[0m
        [90mGET /header.html [33m404 [90m1ms[0m
        [90mGET /homepage.nsf [33m404 [90m1ms[0m
        [90mGET /homepage.htm [33m404 [90m1ms[0m
        [90mGET /homepage.asp [33m404 [90m1ms[0m
        [90mGET /home.htm [33m404 [90m0ms[0m
        [90mGET /home.html [33m404 [90m1ms[0m
        [90mGET /home.asp [33m404 [90m1ms[0m
        [90mGET /login.asp [33m404 [90m0ms[0m
        [90mGET /login.html [33m404 [90m0ms[0m
        [90mGET /login.htm [33m404 [90m1ms[0m
        [90mGET /login.php [33m404 [90m0ms[0m
        [90mGET /index.cfm [33m404 [90m0ms[0m
        [90mGET /main.php [33m404 [90m1ms[0m
        [90mGET /main.asp [33m404 [90m1ms[0m
        [90mGET /main.htm [33m404 [90m1ms[0m
        [90mGET /main.html [33m404 [90m2ms[0m
        [90mGET /Welcome.html [33m404 [90m1ms[0m
        [90mGET /welcome.htm [33m404 [90m1ms[0m
        [90mGET /start.htm [33m404 [90m1ms[0m
        [90mGET /fleur.png [33m404 [90m0ms[0m
        [90mGET /level/99/ [33m404 [90m1ms[0m
        [90mGET /chl.css [33m404 [90m0ms[0m
        [90mGET /images/ [33m404 [90m0ms[0m
        [90mGET /robots.txt [33m404 [90m2ms[0m
        [90mGET /hb1/presign.asp [33m404 [90m1ms[0m
        [90mGET /NFuse/ASP/login.htm [33m404 [90m0ms[0m
        [90mGET /CCMAdmin/main.asp [33m404 [90m1ms[0m
        [90mGET /TiVoConnect?Command=QueryServer [33m404 [90m1ms[0m
        [90mGET /admin/images/rn_logo.gif [33m404 [90m1ms[0m
        [90mGET /vncviewer.jar [33m404 [90m1ms[0m
        [90mGET / [32m200 [90m2ms - 240b[0m
        [90mGET / [32m200 [90m2ms - 240b[0m
        [90mGET / [32m200 [90m7ms - 240b[0m
        [90mOPTIONS / [32m200 [90m1ms - 3b[0m
        [90mTRACE / [33m404 [90m0ms[0m
        [90mPROPFIND / [33m404 [90m0ms[0m
        [90mGET /\./ [33m404 [90m1ms[0m

    But here is where things start getting fishy.

        [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m
        [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
        [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
        [90mGET / [32m200 [90m2ms - 240b[0m
        [90mGET / [32m200 [90m1ms - 240b[0m
        [90mGET /robots.txt [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m0ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m3ms[0m
        [90mGET /manager/html [33m404 [90m0ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m0ms[0m
        [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
        [90mGET http://37.28.156.211/sprawdza.php [33m404 [90m1ms[0m
        [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m
        [90mHEAD / [32m200 [90m1ms - 240b[0m
        [90mGET http://www.daydaydata.com/proxy.txt [33m404 [90m19ms[0m
        [90mHEAD / [32m200 [90m1ms - 240b[0m
        [90mGET /manager/html [33m404 [90m2ms[0m
        [90mGET / [32m200 [90m4ms - 240b[0m
        [90mGET http://www.google.pl/search?q=wp.pl [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m0ms[0m
        [90mHEAD / [32m200 [90m2ms - 240b[0m
        [90mGET http://www.google.pl/search?q=onet.pl [33m404 [90m1ms[0m
        [90mHEAD / [32m200 [90m2ms - 240b[0m
        [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
        [90mGET http://www.google.pl/search?q=ostro%C5%82%C4%99ka [33m404 [90m1ms[0m
        [90mGET http://www.google.pl/search?q=google [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m
        [90mHEAD / [32m200 [90m2ms - 240b[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m0ms[0m
        [90mGET / [32m200 [90m2ms - 240b[0m
        [90mGET http://www.baidu.com/ [32m200 [90m2ms - 240b[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mPOST /api/login [32m200 [90m1ms - 28b[0m
        [90mGET /web-console/ServerInfo.jsp [33m404 [90m2ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET http://www.google.com/ [32m200 [90m10ms - 240b[0m
        [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
        [90mGET / [32m200 [90m2ms - 240b[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET http://proxyjudge.info [32m200 [90m2ms - 240b[0m
        [90mGET / [32m200 [90m2ms - 240b[0m
        [90mGET / [32m200 [90m1ms - 240b[0m
        [90mGET http://www.google.com/ [32m200 [90m3ms - 240b[0m
        [90mGET http://www.google.com/ [32m200 [90m3ms - 240b[0m
        [90mGET http://www.baidu.com/ [32m200 [90m1ms - 240b[0m
        [90mGET /manager/html [33m404 [90m0ms[0m
        [90mGET /manager/html [33m404 [90m1ms[0m
        [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m
        [90mHEAD / [32m200 [90m1ms - 240b[0m
        [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
        [90mGET http://www.google.com/search?tbo=d&source=hp&num=1&btnG=Search&q=niceman [33m404 [90m2ms[0m

    So my questions are: how come my server returns a 200 OK for the root URLs of other domains? How did the attacker even manage to send my server a GET request such that "http://www.google.com/" shows up in the log, when my server is simply an API that works on relative URLs such as "/api/login"?
    And while I have looked up the OPTIONS, TRACE and PROPFIND HTTP requests that my server logged, it would be great if someone could explain what exactly the attacker was trying to achieve by using these verbs. Also, what in the world does "[90m [32m [90m1ms - 240b[0m" mean? The "ms" makes sense, probably milliseconds for the request; the rest I am unable to understand. Thank you!
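    A short sketch of the logging format, with the caveat that it assumes the entries come from Express's "dev" logger format (morgan or the older connect logger): the bracketed numbers are ANSI terminal color escapes whose leading ESC byte was lost when the output was redirected into nohup.out. 90 is grey, 32 green (2xx), 33 yellow (4xx), 31 red (5xx), and 0 resets the color; "240b" is the response size in bytes.

        // What the logger actually emits (sketch; \u001b is the ESC byte):
        console.log('\u001b[90mGET / \u001b[32m200 \u001b[90m2ms - 240b\u001b[0m');
        // In a terminal this renders as a colored line; with ESC stripped it
        // degrades to "[90mGET / [32m200 [90m2ms - 240b[0m".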


  • Drawing using Dynamic Array and Buffer Object

    - by user1905910
    I have a problem when creating the vertex array and the index array. I don't know what the problem with the code really is, but I guess it is something to do with the types of the arrays; can someone please shed some light on this?

        #define GL_GLEXT_PROTOTYPES
        #include <GL/glut.h>
        #include <iostream>
        using namespace std;

        #define BUFFER_OFFSET(offset) ((GLfloat*) NULL + offset)

        const GLuint numDiv = 2;
        const GLuint numVerts = 9;
        GLuint VAO;

        void display(void)
        {
            enum vertex { VERTICES, INDICES, NUM_BUFFERS };
            GLuint *buffers = new GLuint[NUM_BUFFERS];
            GLfloat (*squareVerts)[2] = new GLfloat[numVerts][2];
            GLubyte *indices = new GLubyte[numDiv * numDiv * 4];
            GLuint delta = 80 / numDiv;

            for (GLuint i = 0; i < numVerts; i++) {
                squareVerts[i][1] = (i / (numDiv + 1)) * delta;
                squareVerts[i][0] = (i % (numDiv + 1)) * delta;
            }

            for (GLuint i = 0; i < numDiv; i++) {
                for (GLuint j = 0; j < numDiv; j++) {
                    // each iteration generates 4 points
        #define NUM_VERT(ii,jj) ((ii)*(numDiv+1)+(jj))
        #define INDICE(ii,jj) (4*((ii)*numDiv+(jj)))
                    indices[INDICE(i,j)]   = NUM_VERT(i,j);
                    indices[INDICE(i,j)+1] = NUM_VERT(i,j+1);
                    indices[INDICE(i,j)+2] = NUM_VERT(i+1,j+1);
                    indices[INDICE(i,j)+3] = NUM_VERT(i+1,j);
                }
            }

            glGenVertexArrays(1, &VAO);
            glBindVertexArray(VAO);
            glGenBuffers(NUM_BUFFERS, buffers);
            glBindBuffer(GL_ARRAY_BUFFER, buffers[VERTICES]);
            glBufferData(GL_ARRAY_BUFFER, sizeof(squareVerts), squareVerts, GL_STATIC_DRAW);
            glVertexPointer(2, GL_FLOAT, 0, BUFFER_OFFSET(0));
            glEnableClientState(GL_VERTEX_ARRAY);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[INDICES]);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(1.0, 1.0, 1.0);
            glDrawElements(GL_POINTS, 16, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
            glutSwapBuffers();
        }

        void init()
        {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            gluOrtho2D((GLdouble) -1.0, (GLdouble) 90.0, (GLdouble) -1.0, (GLdouble) 90.0);
        }

        int main(int argv, char** argc)
        {
            glutInit(&argv, argc);
            glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
            glutInitWindowSize(500, 500);
            glutInitWindowPosition(100, 100);
            glutCreateWindow("myCode.cpp");
            init();
            glutDisplayFunc(display);
            glutMainLoop();
            return 0;
        }

    Edit: the problem is that drawing doesn't work at all. I get no errors; it just doesn't display what I want. Even if I put the code that builds the vertices and fills the buffers in a different function, it doesn't work. I just tried this:

        void display(void)
        {
            glBindVertexArray(VAO);
            glClear(GL_COLOR_BUFFER_BIT);
            glColor3f(1.0, 1.0, 1.0);
            glDrawElements(GL_POINTS, 16, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));
            glutSwapBuffers();
        }

    and placed the rest of the display code in another function that is called at the start of the program, but the problem remains.
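    One concrete bug worth sketching, offered as an assumption about the failure rather than a verified diagnosis: once the arrays are allocated with new, squareVerts and indices are pointers, so sizeof(squareVerts) and sizeof(indices) are the size of a pointer (4 or 8 bytes), not of the data, and the buffers are created nearly empty. Passing the byte counts explicitly avoids that:

        // Sketch: compute the real byte counts instead of sizeof(pointer).
        glBufferData(GL_ARRAY_BUFFER, numVerts * 2 * sizeof(GLfloat),
                     squareVerts, GL_STATIC_DRAW);
        // ...
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, numDiv * numDiv * 4 * sizeof(GLubyte),
                     indices, GL_STATIC_DRAW);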


  • Can anyone help me find why this C program works on VS2005 but not on Dev-C++?

    - by user333771
    Hello everybody, and greetings from Greece. I have a C program for an exercise, and it has a strange issue: the program runs just fine on VS 2005 but crashes on Dev-C++, and the problem is that the exercise is always evaluated against Dev-C++. The program is about inserting nodes into a BST, and this is where the problem lies... Well, I would really appreciate some help.

        #include <stdio.h>
        #include <stdlib.h>
        #include <malloc.h>

        typedef struct tree_node {
            int value;
            int weight;
            struct tree_node *left;
            struct tree_node *right;
        } TREE_NODE;

        /* The following function creates a Binary Search Tree */
        TREE_NODE *create_tree(int list[], int size);
        TREE_NODE *search_pos_to_insert(TREE_NODE *root, int value, int *left_or_right); /* this is the problematic function */
        void inorder(TREE_NODE *root); /* inorder traversal */

        TREE_NODE *temp;

        int main()
        {
            TREE_NODE *root; /* pointer to the root of the BST */
            int values[] = {10, 5, 3, 4, 1, 9, 6, 7, 8, 2}; /* values for the BST */
            int size = 10, tree_weight;

            root = create_tree(values, 10);
            printf("\n");
            inorder(root); /* inorder traversal of the BST */
            system("PAUSE");
        }

        TREE_NODE *search_pos_to_insert(TREE_NODE *root, int value, int *left_or_right)
        {
            if (root != NULL) {
                temp = root;
                if (value > root->value) {
                    *left_or_right = 1;
                    *search_pos_to_insert(root->right, value, left_or_right);
                } else {
                    *left_or_right = 0;
                    *search_pos_to_insert(root->left, value, left_or_right);
                }
            }
            else
                return temp; /* THIS IS THE PROBLEM (1) */
        }

        TREE_NODE *create_tree(int list[], int size)
        {
            TREE_NODE *new_node_pntr, *insert_point, *root = NULL;
            int i, left_or_right;

            /* The first value of the array is the root of the BST. */
            new_node_pntr = (TREE_NODE *) malloc(sizeof(TREE_NODE));
            new_node_pntr->value = list[0]; /* insert the first value of the array */
            new_node_pntr->weight = 0;
            new_node_pntr->left = NULL;
            new_node_pntr->right = NULL;
            root = new_node_pntr;

            /* Now the rest of the array. */
            for (i = 1; i < size; i++) {
                insert_point = search_pos_to_insert(root, list[i], &left_or_right); /* THIS IS THE PROBLEM (2) */
                /* insert_point just won't get the return from temp */
                new_node_pntr = (TREE_NODE *) malloc(sizeof(TREE_NODE));
                new_node_pntr->value = list[i];
                new_node_pntr->weight = 0;
                new_node_pntr->left = NULL;
                new_node_pntr->right = NULL;
                if (left_or_right == 0)
                    insert_point->left = new_node_pntr;
                else
                    insert_point->right = new_node_pntr;
            }
            return (root);
        }

        void inorder(TREE_NODE *root)
        {
            if (root == NULL)
                return;
            inorder(root->left);
            printf("Value: %d, Weight: %d.\n", root->value, root->weight);
            inorder(root->right);
        }
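    A sketch of the likely fix, based on the two lines the post itself flags: the recursive calls' results are dereferenced and discarded instead of being returned, so the outer call falls off the end of the function and the caller reads an undefined return value. VS2005 happens to generate code that still works; gcc (Dev-C++) does not have to, which is classic undefined behaviour. Returning the recursive result removes it:

        /* Sketch: propagate the result instead of dereferencing and dropping it. */
        TREE_NODE *search_pos_to_insert(TREE_NODE *root, int value, int *left_or_right)
        {
            if (root == NULL)
                return temp;                 /* deepest non-NULL node visited */
            temp = root;
            if (value > root->value) {
                *left_or_right = 1;
                return search_pos_to_insert(root->right, value, left_or_right);
            } else {
                *left_or_right = 0;
                return search_pos_to_insert(root->left, value, left_or_right);
            }
        }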


  • I never really understood: what is Application Binary Interface (ABI)?

    - by claws
    I never clearly understood what an ABI is. I'm sorry for such a lengthy question; I just want to understand things clearly. Please don't point me to the wiki article: if I could understand it, I wouldn't be here posting such a lengthy post.

    This is my mindset about different interfaces: a TV remote is an interface between the user and the TV. It is an existing entity, but useless (it provides no functionality) by itself; all the functionality for each of those buttons is implemented in the television set.

    Interface: an "existing entity" layer between the functionality and the consumer of that functionality. An interface by itself does nothing; it just invokes the functionality lying behind it. Now, depending on who the consumer is, there are different types of interfaces.

    Command Line Interface (CLI): commands are the existing entities, the consumer is the user, and the functionality lies behind.
        functionality: my software's functionality, which solves the purpose for which we are describing this interface.
        existing entities: commands
        consumer: user

    Graphical User Interface (GUI): windows, buttons, etc. are the existing entities; again the consumer is the user, and the functionality lies behind.
        functionality: my software's functionality, which solves the purpose for which we are describing this interface.
        existing entities: windows, buttons, etc.
        consumer: user

    Application Programming Interface (API): functions, or more correctly interfaces (in interface-based programming), are the existing entities; the consumer here is another program, not a user; and again the functionality lies behind this layer.
        functionality: my software's functionality, which solves the purpose for which we are describing this interface.
        existing entities: functions, interfaces (arrays of functions).
        consumer: another program/application.

    Application Binary Interface (ABI): here my problem starts.
        functionality: ???
        existing entities: ???
        consumer: ???

    I've written software in different languages and provided different kinds of interfaces (CLI, GUI, API), but I'm not sure I ever provided an ABI. http://en.wikipedia.org/wiki/Application_binary_interface says:

        ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system. Other ABIs standardize details such as C++ name mangling, exception propagation, and the calling convention between compilers on the same platform, but do not require cross-platform compatibility.

    Who needs these details? Please don't say the OS. I know assembly programming; I know how linking and loading work; I know exactly what happens inside. Where does C++ name mangling come in? I thought we were talking at the binary level; where do languages come in?

    Anyway, I downloaded the System V Application Binary Interface, Edition 4.1 (1997-03-18) [PDF] to see what exactly it contains. Well, most of it didn't make any sense. Why does it contain two chapters (the 4th and 5th) that describe the ELF file format? In fact, those are the only two significant chapters of that specification; the rest of the chapters are all "processor specific". Anyway, I thought that was a completely different topic. Please don't say that the ELF file format specs are the ABI; that doesn't qualify as an interface according to the definition. I know that, since we are talking at such a low level, it must be very specific.
    But I'm not sure how it is "Instruction Set Architecture (ISA)" specific. Where can I find MS Windows' ABI? So, these are the major queries that are bugging me.
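    One concrete ABI-visible detail can be sketched in a few lines of C: struct size and alignment. The numbers in the comments assume the x86-64 System V ABI; a different ABI may lay the same struct out differently, which is exactly why separately compiled binaries that exchange this struct must agree on one ABI.

        /* Sketch: the platform ABI, not the C language, decides this layout. */
        #include <stdio.h>

        struct Example {
            char c;   /* 1 byte, then 3 padding bytes under x86-64 System V */
            int  i;   /* 4 bytes, aligned to a 4-byte boundary */
        };

        int main(void)
        {
            /* Prints 8 under x86-64 System V; an ABI with other alignment
               rules could legally print something else. */
            printf("%zu\n", sizeof(struct Example));
            return 0;
        }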


  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is which parts of a video are watched most. The design I came up with is to split the video into 256 intervals and keep a floating-point view count for each of them. I receive the data as a range of intervals the user watched continuously. The problem is how to store them. There are two solutions I see.

    Row per video segment. Let's have a database table like this:

        CREATE TABLE `video_heatmap` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `video_id` int(11) NOT NULL,
          `position` tinyint(3) unsigned NOT NULL,
          `views` float NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `idx_lookup` (`video_id`,`position`)
        ) ENGINE=MyISAM

    Then, whenever a batch of views has to be processed, make sure the respective database rows exist and add the appropriate values to the views column. I found it is a lot faster if the existence of the rows is taken care of first (SELECT COUNT(*) of the rows for a given video and INSERT IGNORE if any are lacking), and then a number of update queries is used like this:

        UPDATE video_heatmap
           SET views = views + ?
         WHERE video_id = ? AND position >= ? AND position < ?

    This seems, however, a little bloated. The other solution I came up with is:

    Row per video, updated in transactions. A table will look (sort of) like this:

        CREATE TABLE video (
          id INT NOT NULL AUTO_INCREMENT,
          heatmap BINARY (4 * 256) NOT NULL,
          ...
        ) ENGINE=InnoDB

    Then, every time a view needs to be stored, it is done in a transaction with a consistent snapshot, in a sequence like this:

    1. If the video doesn't exist in the database, it is created.
    2. The row is retrieved, and heatmap, an array of floats stored in binary form, is converted into a form friendlier for processing (in PHP).
    3. Values in the array are increased appropriately and the array is converted back.
    4. The row is changed via an UPDATE query.

    So far the advantages can be summed up like this:

    First approach
    - Stores data as floats, not as some magical binary array.
    - Doesn't require transaction support, so doesn't require InnoDB; we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines (only applies in my specific situation).
    - Doesn't require a transaction WITH CONSISTENT SNAPSHOT; I don't know what the performance penalties of those are.
    - I already implemented it and it works (only applies in my specific situation).

    Second approach
    - Uses a lot less storage space (the first approach stores the video ID 256 times and stores a position for every segment of the video, not to mention the primary key).
    - Should scale better, because of InnoDB's per-row locking as opposed to MyISAM's table locking.
    - Might generally work faster, because far fewer requests are made.
    - Easier to implement in code (although the other one is already implemented).

    So, what should I do? If it weren't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning toward the first one. Maybe there are reasons to favour one approach or the other?
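    A minimal sketch of the second approach's read-modify-write step in PHP, assuming the column holds 256 machine-order floats (the pack format and the variable names are assumptions):

        // Inside the transaction, after the row's heatmap BLOB is in $blob:
        $views = array_values(unpack('f256', $blob));   // BLOB -> 256 floats

        foreach ($watchedSegments as $i) {              // e.g. array(17, 18, 19)
            $views[$i] += 1.0;
        }

        $blob = '';
        foreach ($views as $v) {                        // 256 floats -> BLOB
            $blob .= pack('f', $v);
        }
        // ...then: UPDATE video SET heatmap = ? WHERE id = ?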


  • Android Sqlite - obtaining the correct database row id

    - by Dan_Dan_Man
    I'm working on an app that allows the user to create notes while rehearsing a play. The user can view the notes they have created in a ListView, and edit and delete them if they wish.

    Take for example a user who creates 3 notes: in the database, the row ids will be 1, 2 and 3, and in the ListView they will be in the same order 1, 2, 3 (initially 0, 1, 2 before I increment the values), so the user can view and delete the correct row from the database.

    The problem arises when the user deletes a note. Say the user deletes the note in position 2. The database is left with row ids 1 and 3, but in the ListView the remaining notes sit at positions 1 and 2. So if the user clicks on the note at position 2 of the ListView, it should return the database row with row_id 3; instead it looks for row_id 2, which doesn't exist, and hence crashes. I need to know how to obtain the corresponding row_id given the user's selection in the ListView. Here is the code:

        // When the user selects "Delete" in the context menu
        public boolean onContextItemSelected(MenuItem item) {
            AdapterContextMenuInfo info = (AdapterContextMenuInfo) item.getMenuInfo();
            switch (item.getItemId()) {
            case DELETE_ID:
                deleteNote(info.id + 1);
                return true;
            }
            return super.onContextItemSelected(item);
        }

        // This method actually deletes the selected note
        private void deleteNote(long id) {
            Log.d(TAG, "Deleting row: " + id);
            mNDbAdapter.deleteNote(id);
            mCursor = mNDbAdapter.fetchAllNotes();
            startManagingCursor(mCursor);
            fillData();
            // TODO: Update play database if there are no notes left for a line.
        }

        // When the user clicks on an item, display the selected note
        protected void onListItemClick(ListView l, View v, int position, long id) {
            super.onListItemClick(l, v, position, id);
            viewNote(id, "", "", true);
        }

        // This is where we display the note in a custom alert dialog. I've omitted
        // the rest of the code in this method because the problem lies in this line:
        // "mCursor = mNDbAdapter.fetchNote(newId);"
        // I need to replace "newId" with the row_id in the database.
        private void viewNote(long id, String defaultTitle, String defaultNote, boolean fresh) {
            final int lineNumber;
            String title;
            String note;
            id++;
            final long newId = id;
            Log.d(TAG, "Returning row: " + newId);
            mCursor = mNDbAdapter.fetchNote(newId);
            lineNumber = (mCursor.getInt(mCursor.getColumnIndex("number")));
            title = (mCursor.getString(mCursor.getColumnIndex("title")));
            note = (mCursor.getString(mCursor.getColumnIndex("note")));
            . . .
        }

    Let me know if you would like me to show any more code. It seems like something so simple, but I just can't find a solution. Thanks!
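    One hedged way out, sketched under the assumption that fetchAllNotes() returns a cursor exposing an "_id" column: back the list with a SimpleCursorAdapter instead of doing arithmetic on list positions, so the id handed to onListItemClick is the real database row id regardless of deletions. The layout and column names below are illustrative:

        // In fillData() (sketch):
        SimpleCursorAdapter adapter = new SimpleCursorAdapter(this,
                R.layout.note_row,                 // hypothetical row layout
                mCursor,
                new String[] { "title" },          // DB column to display
                new int[] { R.id.note_title });    // view that shows it
        setListAdapter(adapter);

        @Override
        protected void onListItemClick(ListView l, View v, int position, long id) {
            super.onListItemClick(l, v, position, id);
            viewNote(id, "", "", true);            // 'id' is now the row id; drop the id++
        }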


  • Optimizing JS Array Search

    - by The.Anti.9
    I am working on a browser-based media player written almost entirely in HTML5 and JavaScript. The backend is written in PHP, but it has one function, which is to fill the playlist on the initial load; the rest is all JS. There is a search bar that refines the playlist, and I want it to refine as the user types, like most media players do. The only problem is that this is very slow and laggy, as there are about 1000 songs in the whole program and there are likely to be more as time goes on.

    The initial playlist load is an AJAX call to a PHP page that returns the results as JSON. Each item has 4 attributes: artist, album, file, url. I then loop through each object and add it to an array called playlist. At the end of the loop a copy of playlist is created, backup, so that I can refine the playlist variable when people refine their search but still repopulate it from backup without making another server request.

    The method refine() is called when the user types a key into the search box. It flushes playlist and searches each property (not including url) of each object in the backup array for a match in the string. If there is a match in any property, it appends the row to the table that displays the playlist and adds the object to playlist for access by the actual player. Code for the refine() method:

        function refine() {
            $('#loadinggif').show();
            $('#library').html("<table id='libtable'><tr><th>Artist</th><th>Album</th><th>File</th><th>&nbsp;</th></tr></table>");
            playlist = [];
            for (var j = 0; j < backup.length; j++) {
                var sfile = new String(backup[j].file);
                var salbum = new String(backup[j].album);
                var sartist = new String(backup[j].artist);
                if (sfile.toLowerCase().search($('#search').val().toLowerCase()) !== -1 ||
                    salbum.toLowerCase().search($('#search').val().toLowerCase()) !== -1 ||
                    sartist.toLowerCase().search($('#search').val().toLowerCase()) !== -1) {
                    playlist.push(backup[j]);
                    num = playlist.length - 1;
                    $("<tr></tr>").html("<td>" + num + "</td><td>" + sartist + "</td><td>" +
                        salbum + "</td><td>" + sfile + "</td><td><a href='#' onclick='setplay(" +
                        num + ");'>Play</a></td>").appendTo('#libtable');
                }
            }
            $('#loadinggif').hide();
        }

    As I said before, for the first couple of letters typed this is very slow and laggy. I am looking for ways to refine this to make it much faster and smoother.
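    A sketch of two cheap wins, assuming the markup and variable names above: precompute one lowercase haystack per track when the playlist loads (note that String.prototype.search treats its argument as a regex, so indexOf is both safer and faster), and debounce refine() so it only runs after the user pauses typing.

        // Once, right after the playlist loads:
        for (var j = 0; j < backup.length; j++) {
            backup[j].haystack = (backup[j].file + ' ' + backup[j].album + ' ' +
                                  backup[j].artist).toLowerCase();
        }

        // In refine(): read the needle once, match with indexOf.
        var needle = $('#search').val().toLowerCase();
        // inside the loop:
        //   if (backup[j].haystack.indexOf(needle) !== -1) { ... }

        // Debounce: run refine() 200 ms after the last keystroke.
        var searchTimer;
        $('#search').keyup(function () {
            clearTimeout(searchTimer);
            searchTimer = setTimeout(refine, 200);
        });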


  • Rewrite C++ code into Objective-C

    - by Phil_M
    Hello, I have some C++ source code that I would like to rewrite in Objective-C. It would help me a lot if someone could write a header file for this code; once I have the header file, I will be able to rewrite the rest of the source code myself. It would be very nice if someone could help me, please. Thanks. I will post the source code here:

        #include <stdlib.h>
        #include <iostream.h>

        #define STATES 5

        int transitionTable[STATES][STATES];

        // function declarations:
        double randfloat(void);
        int chooseNextEventFromTable(int current, int table[STATES][STATES]);
        int chooseNextEventFromTransitionTablee(int currentState);
        void setTable(int value, int table[STATES][STATES]);

        //////////////////////////////////////////////////////////////////////////

        int main(void)
        {
            int i;

            // for demo purposes:
            transitionTable[0][0] = 0;  transitionTable[0][1] = 20; transitionTable[0][2] = 30;
            transitionTable[0][3] = 50; transitionTable[0][4] = 0;
            transitionTable[1][0] = 35; transitionTable[1][1] = 25; transitionTable[1][2] = 20;
            transitionTable[1][3] = 30; transitionTable[1][4] = 0;
            transitionTable[2][0] = 70; transitionTable[2][1] = 0;  transitionTable[2][2] = 15;
            transitionTable[2][3] = 0;  transitionTable[2][4] = 15;
            transitionTable[3][0] = 0;  transitionTable[3][1] = 25; transitionTable[3][2] = 25;
            transitionTable[3][3] = 0;  transitionTable[3][4] = 50;
            transitionTable[4][0] = 13; transitionTable[4][1] = 17; transitionTable[4][2] = 22;
            transitionTable[4][3] = 48; transitionTable[4][4] = 0;

            int currentState = 0;
            for (i = 0; i < 10; i++) {
                std::cout << currentState << " ";
                currentState = chooseNextEventFromTransitionTablee(currentState);
            }
            return 0;
        };

        //////////////////////////////////////////////////////////////////////////

        //
        // chooseNextEventFromTransitionTable -- choose the next note.
        //
        int chooseNextEventFromTransitionTablee(int currentState)
        {
            int targetSum = 0;
            int sum = 0;
            int targetNote = 0;
            int totalevents = 0;
            int i;

            currentState = currentState % STATES; // remove any octave value
            for (i = 0; i < STATES; i++) {
                totalevents += transitionTable[currentState][i];
            }
            targetSum = (int)(randfloat() * totalevents + 0.5);
            while (targetNote < STATES &&
                   sum + transitionTable[currentState][targetNote] < targetSum) {
                sum += transitionTable[currentState][targetNote];
                targetNote++;
            }
            return targetNote;
        }

        //
        // randfloat -- returns a random number between 0.0 and 1.0.
        //
        double randfloat(void)
        {
            return (double)rand() / RAND_MAX;
        }

        //
        // setTable -- set all values in the transition table to the given value.
        //
        void setTable(int value, int table[STATES][STATES])
        {
            int i, j;
            for (i = 0; i < STATES; i++) {
                for (j = 0; j < STATES; j++) {
                    table[i][j] = value;
                }
            }
        }
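    Since the ask is specifically a header, here is a minimal sketch of an Objective-C interface for the same logic. The class and method names are invented for illustration; the original free functions become instance methods and the global table becomes an instance variable.

        // TransitionChooser.h -- hypothetical Objective-C counterpart (sketch)
        #import <Foundation/Foundation.h>

        #define STATES 5

        @interface TransitionChooser : NSObject {
            int transitionTable[STATES][STATES];
        }
        - (void)setTableValue:(int)value;                      // fill the whole table
        - (int)chooseNextEventFromTransitionTable:(int)state;  // weighted next-note choice
        - (double)randfloat;                                   // random 0.0 .. 1.0
        @end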


  • Object reference not set to an instance of an object - linked list example

    - by Zoro Roronoa
    I am seeing the following error: "Object reference not set to an instance of an object! Check to determine if the object is null before calling the method!" I'm new to C#, and I made a program for sorted linked lists. Here is the code where the error comes from:

        public void Insert(double data)
        {
            Link newLink = new Link(data);
            Link current = first;
            Link previous = null;

            if (first == null)
            {
                first = newLink;
            }
            else
            {
                while (data > current.DData && current != null)
                {
                    previous = current;
                    current = current.Next;
                }
                previous.Next = newLink;
                newLink.Next = current;
            }
        }

    It says that the current reference is null in while (data > current.DData && current != null), but I assigned it: current = first. Please help! The rest is the complete code of the program:

        class Link
        {
            double dData;
            Link next = null;

            public Link Next
            {
                get { return next; }
                set { next = value; }
            }

            public double DData
            {
                get { return dData; }
                set { dData = value; }
            }

            public Link(double dData)
            {
                this.dData = dData;
            }

            public void DisplayLink()
            {
                Console.WriteLine("Link : " + dData);
            }
        }

        class SortedList
        {
            Link first;

            public SortedList()
            {
                first = null;
            }

            public bool IsEmpty()
            {
                return (this.first == null);
            }

            public void Insert(double data)
            {
                Link newLink = new Link(data);
                Link current = first;
                Link previous = null;

                if (first == null)
                {
                    first = newLink;
                }
                else
                {
                    while (data > current.DData && current != null)
                    {
                        previous = current;
                        current = current.Next;
                    }
                    previous.Next = newLink;
                    newLink.Next = current;
                }
            }

            public Link Remove()
            {
                Link temp = first;
                first = first.Next;
                return temp;
            }

            public void DisplayList()
            {
                Link current;
                current = first;
                Console.WriteLine("Display the List!");
                while (current != null)
                {
                    current.DisplayLink();
                    current = current.Next;
                }
            }
        }

        class SortedListApp
        {
            public void TestSortedList()
            {
                SortedList newList = new SortedList();
                newList.Insert(20);
                newList.Insert(22);
                newList.Insert(100);
                newList.Insert(1000);
                newList.Insert(15);
                newList.Insert(11);
                newList.DisplayList();
                newList.Remove();
                newList.DisplayList();
            }
        }
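    A sketch of the fix: && evaluates left to right, so data > current.DData runs before the null test, and once the new value is larger than everything in the list, current has already walked off the end. Swapping the operands tests for null first; a previous == null branch then covers inserting in front of the current head, where previous.Next would itself be a null dereference:

        // Reordered loop plus a head-insertion case (sketch):
        while (current != null && data > current.DData)
        {
            previous = current;
            current = current.Next;
        }
        if (previous == null)          // new smallest value becomes the head
        {
            newLink.Next = first;
            first = newLink;
        }
        else
        {
            previous.Next = newLink;
            newLink.Next = current;
        }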


  • Replace HTML element with data in JavaScript

    - by Ultimate
    I am trying to auto-increment the serial number when a row is added and automatically readjust the numbering order when a row gets deleted, in JavaScript. For that I am using the following clone method. Everything works correctly except that the srno does not increase, because a clone of it is created. Following is my code:

        function addCloneRow(obj) {
            if (obj) {
                var tBody = obj.parentNode.parentNode.parentNode;
                var trTable = tBody.getElementsByTagName("tr")[1];
                var trClone = trTable.cloneNode(true);
                if (trClone) {
                    var txt = trClone.getElementsByTagName("input");
                    var srno = trClone.getElementsByTagName("span");
                    var dd = trClone.getElementsByTagName("select");
                    text = tBody.getElementsByTagName("tr").length;
                    alert(text); // here I am getting the srno in increasing order
                    // I tried something like the following, but it is not working:
                    // var ele = srno.replace(document.createElement("h1"), srno);
                    // alert(ele);
                    for (var i = 0; i < dd.length; i++) {
                        dd[i].options[0].selected = true;
                        var nm = dd[i].name;
                        var nNm = nm.substring((nm.indexOf("_") + 1), nm.indexOf("["));
                        dd[i].name = nNm + "[]";
                    }
                    for (var j = 0; j < txt.length; j++) {
                        var nm = txt[j].name;
                        var nNm = nm.substring((nm.indexOf("_") + 1), nm.indexOf("["));
                        txt[j].name = nNm + "[]";
                        if (txt[j].type == "hidden") {
                            txt[j].value = "0";
                        } else if (txt[j].type == "text") {
                            txt[j].value = "";
                        } else if (txt[j].type == "checkbox") {
                            txt[j].checked = false;
                        }
                    }
                    for (var j = 0; j < txt.length; j++) {
                        var nm = txt[j].name;
                        var nNm = nm.substring((nm.indexOf("_") + 1), nm.indexOf("["));
                        txt[j].name = nNm + "[]";
                        if (txt[j].type == "hidden") {
                            txt[j].value = "0";
                        } else if (txt[j].type == "text") {
                            txt[j].value = "";
                        } else if (txt[j].type == "checkbox") {
                            txt[j].checked = false;
                        }
                    }
                    tBody.insertBefore(trClone, tBody.childNodes[1]);
                }
            }
        }

    Following is my HTML:

        <table id="step_details" style="display:none;">
          <tr>
            <th width="5">#</th>
            <th width="45%">Step details</th>
            <th>Expected Results</th>
            <th width="25">Execution</th>
            <th><img src="gui/themes/default/images/ico_add.gif" onclick="addCloneRow(this);"/></th>
          </tr>
          <tr>
            <td><span>1</span></td>
            <td><textArea name="step_details[]"></textArea></td>
            <td><textArea name="expected_results[]"></textArea></td>
            <td><select onchange="content_modified = true" name="exec_type[]">
                  <option selected="selected" value="1" label="Manual">Manual</option>
                  <option value="2" label="Automated">Automated</option>
                </select>
            </td>
            <td><img src="gui/themes/default/images/ico_del.gif" onclick="removeCloneRow(this);"/></td>
          </tr>
        </table>

    I want to change the serial number in the span element dynamically after each increment and decrement. Need help, thanks.
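    A minimal sketch of one way to keep the numbers straight, assuming the markup above: instead of fixing up only the clone's number, renumber every data row after each add or delete. Call this at the end of addCloneRow() and removeCloneRow():

        // Plain-DOM sketch: row 0 is the header, so data rows start at 1.
        function renumberRows() {
            var rows = document.getElementById("step_details").getElementsByTagName("tr");
            for (var i = 1; i < rows.length; i++) {
                var span = rows[i].getElementsByTagName("span")[0];
                if (span) {
                    span.innerHTML = i;   // serial number follows the row's index
                }
            }
        }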


  • Java: Object Array assignment in for loop

    - by Hackster
    I am trying to use Dijkstra's algorithm to find the shortest path from a specific vertex (v[0]) to the rest of them. That part is solved and works well with the code from this link:
    http://en.literateprograms.org/index.php?title=Special:DownloadCode/Dijkstra%27s_algorithm_(Java)&oldid=15444

    I am having trouble with building the Edge array in a for loop from user input, as opposed to hard-coding it like it is here. Any help assigning new edges to Edge[] adjacencies for each vertex, keeping in mind there could be one edge or many?

        class Vertex implements Comparable<Vertex> {
            public final String name;
            public Edge[] adjacencies;
            public double minDistance = Double.POSITIVE_INFINITY;
            public Vertex previous;

            public Vertex(String argName) {
                name = argName;
            }

            public String toString() {
                return name;
            }

            public int compareTo(Vertex other) {
                return Double.compare(minDistance, other.minDistance);
            }
        }

        class Edge {
            public final Vertex target;
            public final double weight;

            public Edge(Vertex argTarget, double argWeight) {
                target = argTarget;
                weight = argWeight;
            }
        }

        public static void main(String[] args) {
            Vertex[] v = new Vertex[3];
            v[0] = new Vertex("Harrisburg");
            v[1] = new Vertex("Baltimore");
            v[2] = new Vertex("Washington");

            v[0].adjacencies = new Edge[] { new Edge(v[1], 1), new Edge(v[2], 3) };
            v[1].adjacencies = new Edge[] { new Edge(v[0], 1), new Edge(v[2], 1) };
            v[2].adjacencies = new Edge[] { new Edge(v[0], 3), new Edge(v[1], 1) };

            Vertex[] vertices = { v[0], v[1], v[2] };
            /* Three vertices with weights:
               V0 connects (V1,1),(V2,3)
               V1 connects (V0,1),(V2,1)
               V2 connects (V1,1),(V2,3) */

            computePaths(v[0]);
            for (Vertex vert : vertices) {
                System.out.println("Distance to " + vert + ": " + vert.minDistance);
                List<Vertex> path = getShortestPathTo(vert);
                System.out.println("Path: " + path);
            }
        }

    The code above works well in finding the shortest path from v[0] to all the other vertices. The problem occurs when assigning the new Edge[] to Edge[] adjacencies. For example, this does not produce the correct output:

        for (int i = 0; i < total_vertices; i++) {
            s = br.readLine();
            char[] line = s.toCharArray();
            for (int j = 0; j < line.length; j++) {
                if (j % 4 == 0) { // Input: vertex weight vertex weight: 1 1 2 3
                    int vert = Integer.parseInt(String.valueOf(line[j]));
                    int w = Integer.parseInt(String.valueOf(line[j + 2]));
                    v[i].adjacencies = new Edge[] { new Edge(v[vert], w) };
                }
            }
        }

    As opposed to this:

        v[0].adjacencies = new Edge[] { new Edge(v[1], 1), new Edge(v[2], 3) };

    How can I take the user input and make an Edge[] to pass to adjacencies? The problem is there could be 0 edges or many. Any help would be much appreciated. Thanks!
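    A sketch of one way to build the arrays, under the assumption that each input line holds whitespace-separated "vertex weight" pairs: collect each vertex's edges in a growable list and convert once at the end, instead of overwriting adjacencies with a one-element array on every match, which is what the character-indexed loop above does (besides breaking on multi-digit numbers).

        // Assumes: BufferedReader br, Vertex[] v, int totalVertices (names illustrative);
        // needs java.util.List and java.util.ArrayList imported.
        for (int i = 0; i < totalVertices; i++) {
            List<Edge> edges = new ArrayList<Edge>();
            String[] tok = br.readLine().trim().split("\\s+");
            for (int j = 0; j + 1 < tok.length; j += 2) {
                int target = Integer.parseInt(tok[j]);        // vertex index
                double w = Double.parseDouble(tok[j + 1]);    // edge weight
                edges.add(new Edge(v[target], w));
            }
            v[i].adjacencies = edges.toArray(new Edge[0]);    // works for 0..n edges
        }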


  • My search method is coming up with all nulls

    - by Epic.Distortion
    Let me give a quick explanation. I took a 5-week course on Java through a company in July. It covered basic stuff, like console apps, CRUD operations, MySQL, and n-tier architecture. Since the course ended I haven't used it much, because I went back to work and other medical reasons surfaced. I was told by the company to make a simple program to reflect what I learned; it turns out I retained very little.

    I decided to make a video game storage program. It is meant to store your video games so you don't have to search your bookcase (or however you store your games). It is a basic console app using CRUD operations with MySQL. I have two layers, a presentation layer and a logic layer, and I can't get my search function to work. The search method lets the user search for a game by title, but when I run the program and search, it only displays the title and the rest is null.

    Here is my presentation layer:

        private static Games SearchForGame() {
            Logic aref = new Logic();
            Games g = new Games();
            Scanner scanline = new Scanner(System.in);
            System.out.println("Please enter the name of the game you wish to find:");
            g.setTitle(scanline.nextLine());
            aref.SearchGame();
            System.out.println();
            System.out.println("Game Id: " + g.getGameId());
            System.out.println("Title: " + g.getTitle());
            System.out.println("Rating: " + g.getRating());
            System.out.println("Platform: " + g.getPlatform());
            System.out.println("Developer: " + g.getDeveloper());
            return g;
        }

    And here is my logic layer:

        public Games SearchGame() {
            Games g = new Games();
            try {
                Class.forName(driver).newInstance();
                Connection conn = DriverManager.getConnection(url + dbName, userName, password);
                java.sql.PreparedStatement statement = conn.prepareStatement(
                        "SELECT GameId,Title,Rating,Platform,Developer FROM games WHERE Title=?");
                statement.setString(1, g.getTitle());
                ResultSet rs = statement.executeQuery();
                while (rs.next()) {
                    g.setGameId(rs.getInt("GameId"));
                    g.setTitle(rs.getString("Title"));
                    g.setRating(rs.getString("Rating"));
                    g.setPlatform(rs.getString("Platform"));
                    g.setDeveloper(rs.getString("Developer"));
                    statement.executeUpdate();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            return g;
        }

    Here are my last results:

        Please enter the name of the game you wish to find:
        Skyrim

        Game Id: 0
        Title: Skyrim
        Rating: null
        Platform: null
        Developer: null

    Any help would be greatly appreciated, and thanks in advance.

    EDIT: here is the code for my Games class:

        public class Games {
            public int GameId;
            public String Title;
            public String Rating;
            public String Platform;
            public String Developer;

            public int getGameId() { return GameId; }
            public int setGameId(int gameId) { return GameId = gameId; }
            public String getTitle() { return Title; }
            public String setTitle(String title) { return Title = title; }
            public String getRating() { return Rating; }
            public void setRating(String rating) { Rating = rating; }
            public String getPlatform() { return Platform; }
            public void setPlatform(String platform) { Platform = platform; }
            public String getDeveloper() { return Developer; }
            public void setDeveloper(String developer) { Developer = developer; }
        }
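    A sketch of the two disconnects, based only on the code shown: the logic layer queries with the title of a brand-new Games object (which is always null), and the presentation layer ignores the object the logic layer returns. Passing the title in and keeping the returned object addresses both; the stray executeUpdate() inside the read loop can simply go.

        // Logic layer (sketch): take the title as a parameter.
        public Games SearchGame(String title) {
            Games g = new Games();
            // ...open the connection as before...
            // statement.setString(1, title);   // was g.getTitle(), which is null
            // while (rs.next()) { populate g as before, without executeUpdate(); }
            return g;
        }

        // Presentation layer (sketch): use what the logic layer returns.
        g.setTitle(scanline.nextLine());
        g = aref.SearchGame(g.getTitle());
        System.out.println("Title: " + g.getTitle());   // now fully populated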

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager

    - by rajbk
This post walks you through creating a UI for paging, sorting and filtering a list of data items. It makes use of the excellent MVCContrib Grid and Pager Html UI helpers. A sample project is attached at the bottom. Our UI will eventually look like this. The application will make use of the Northwind database. The top portion of the page has a filter region. The filter region is enclosed in a form tag. The select lists are wired up with jQuery to auto post back the form. The page has a pager region at the top and bottom of the product list. The product list has a link to display more details about a given product. The column headings are clickable for sorting and an icon shows the sort direction. Strongly Typed View Models The views are written to expect strongly typed objects. We suffix these strongly typed objects with ViewModel since they are designed specifically for passing data down to the view. The ProductViewModel class will be used to hold information about a Product (a sketch of it appears at the end of this post). We use attributes to specify if the property should be hidden and what its heading in the table should be. This metadata will be used by the MvcContrib Grid to render the table. Some of the properties are hidden from the UI ([ScaffoldColumn(false)]) but are needed because we will be using them for filtering when writing our LINQ query. The following diagram shows the rest of the key ViewModels in our design. We have a container class called ProductListContainerViewModel which has nested classes. The ProductPagedList is of type IPagination<ProductViewModel>. MvcContrib expects the IPagination<T> interface to determine the page number and page size of the collection we are working with. You convert any IEnumerable<T> into an IPagination<T> by calling the AsPagination extension method in the MvcContrib library. It also creates a paged set of type ProductViewModel. The ProductFilterViewModel class will hold information about the different select lists and the ProductName being searched on.
It will also hold state of any previously selected item in the lists and the previous search criteria (you will recall that this type of state information was stored in Viewstate when working with WebForms). With MVC there is no state storage and so all state has to be fetched and passed back to the view. The GridSortOptions is a type defined in the MvcContrib library and is used by the Grid to determine the current column being sorted on and the current sort direction. The following shows the view and partial views used to render our UI. The Index view expects a type ProductListContainerViewModel which we described earlier. <%Html.RenderPartial("SearchFilters", Model.ProductFilterViewModel); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> <% Html.RenderPartial("SearchResults", Model); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> The View contains a partial view "SearchFilters" and passes it the ProductFilterViewModel. The SearchFilters view uses this Model to render all the search lists and the textbox. The partial view "Pager" uses the ProductPagedList, which implements the interface IPagination. The "Pager" view contains the MvcContrib Pager helper used to render the paging information. This view is repeated twice since we want the pager UI to be available at the top and bottom of the product list. The Pager partial view is located in the Shared directory so that it can be reused across Views. The partial view "SearchResults" uses the ProductListContainerViewModel model. This partial view contains the MvcContrib Grid which needs both the ProductPagedList and GridSortOptions to render itself. The Controller Action Consider a request like this: /Products?productName=test&supplierId=29&categoryId=4. The application receives this GET request and maps it to the Index method of the ProductController. Within the action we create an IQueryable<ProductViewModel> by calling the GetProductsProjected() method. /// <summary> /// This method takes in a filter list, paging/sort options and applies /// them to an IQueryable of type ProductViewModel /// </summary> /// <returns> /// The return object is a container that holds the sorted/paged list, /// state for the filters and state about the current sorted column /// </returns> public ActionResult Index( string productName, int? supplierID, int? categoryID, GridSortOptions gridSortOptions, int? page) { var productList = productRepository.GetProductsProjected(); // Set default sort column if (string.IsNullOrWhiteSpace(gridSortOptions.Column)) { gridSortOptions.Column = "ProductID"; } // Filter on SupplierID if (supplierID.HasValue) { productList = productList.Where(a => a.SupplierID == supplierID); } // Filter on CategoryID if (categoryID.HasValue) { productList = productList.Where(a => a.CategoryID == categoryID); } // Filter on ProductName if (!string.IsNullOrWhiteSpace(productName)) { productList = productList.Where(a => a.ProductName.Contains(productName)); } // Create all filter data and set current values if any // These values will be used to set the state of the select list and textbox // by sending it back to the view. var productFilterViewModel = new ProductFilterViewModel(); productFilterViewModel.SelectedCategoryID = categoryID ?? -1; productFilterViewModel.SelectedSupplierID = supplierID ?? -1; productFilterViewModel.Fill(); // Order and page the product list var productPagedList = productList .OrderBy(gridSortOptions.Column, gridSortOptions.Direction) .AsPagination(page ?? 1, 10); var productListContainer = new ProductListContainerViewModel { ProductPagedList = productPagedList, ProductFilterViewModel = productFilterViewModel, GridSortOptions = gridSortOptions }; return View(productListContainer); } The supplier, category and product name filters are applied to this IQueryable if any are present in the request. The ProductPagedList is created by applying a sort order and calling the AsPagination method. Finally the ProductListContainerViewModel is created and returned to the view. You have seen how to use strongly typed views with the MvcContrib Grid and Pager to render a clean lightweight UI. You also saw how to use partial views to get data from the strongly typed model passed to them from the parent view. The code also shows you how to use jQuery to auto post back (a sketch of this also appears at the end of the post). The sample is attached below. Don't forget to change your connection string to point to the server containing the Northwind database. NorthwindSales_MvcContrib.zip My name is Kobayashi. I work for Keyser Soze.
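For reference, here is a rough sketch of the ProductViewModel described above (the attribute choices and exact property list are assumptions for illustration, not the article's original listing):

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;

public class ProductViewModel
{
    [DisplayName("ID")]
    public int ProductID { get; set; }

    [DisplayName("Product Name")]
    public string ProductName { get; set; }

    // Hidden from the grid but needed for the LINQ filters in the action.
    [ScaffoldColumn(false)]
    public int? SupplierID { get; set; }

    [ScaffoldColumn(false)]
    public int? CategoryID { get; set; }
}

And the jQuery auto post back can be as simple as the following sketch (the form id is an assumption):

$(function () {
    // Submit the filter form whenever one of its select lists changes.
    $("#filterForm select").change(function () {
        $(this).closest("form").submit();
    });
});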

    Read the article

  • Netflix, jQuery, JSONP, and OData

    - by Stephen Walther
At the last MIX conference, Netflix announced that they are exposing their catalog of movie information using the OData protocol. This is great news! This means that you can take advantage of all of the advanced OData querying features against a live database of Netflix movies. In this blog entry, I’ll demonstrate how you can use Netflix, jQuery, JSONP, and OData to create a simple movie lookup form. The form enables you to enter a movie title, or part of a movie title, and display a list of matching movies. For example, Figure 1 illustrates the movies displayed when you enter the value robot into the lookup form. Using the Netflix OData Catalog API You can learn about the Netflix OData Catalog API at the following website: http://developer.netflix.com/docs/oData_Catalog The nice thing about this website is that it provides plenty of samples. It also has a good general reference for OData. For example, the website includes a list of OData filter operators and functions. The Netflix Catalog API exposes 4 top-level resources: Titles – A database of movie information including interesting movie properties such as Synopsis, BoxArt, and Cast. People – A database of people information including interesting information such as Awards, TitlesDirected, and TitlesActedIn. Languages – Enables you to get title information in different languages. Genres – Enables you to get title information for specific movie genres. OData is REST based. This means that you can perform queries by putting together the right URL. For example, if you want to get a list of the movies that were released after 2010 and that had an average rating greater than 4, you can enter the following URL in the address bar of your browser: http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear gt 2010 and AverageRating gt 4 Entering this URL returns the movies in Figure 2. Creating the Movie Lookup Form The complete code for the Movie Lookup form is contained in Listing 1.
Listing 1 – MovieLookup.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Netflix with jQuery</title> <style type="text/css"> #movieTemplateContainer div { width:400px; padding: 10px; margin: 10px; border: black solid 1px; } </style> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script> </head> <body> <label>Search Movies:</label> <input id="movieName" size="50" /> <button id="btnLookup">Lookup</button> <div id="movieTemplateContainer"></div> <script id="movieTemplate" type="text/html"> <div> <img src="<%=BoxArtSmallUrl %>" /> <strong><%=Name%></strong> <p> <%=Synopsis %> </p> </div> </script> <script type="text/javascript"> $("#btnLookup").click(function () { // Build OData query var movieName = $("#movieName").val(); var query = "http://odata.netflix.com/Catalog" // netflix base url + "/Titles" // top-level resource + "?$filter=substringof('" + escape(movieName) + "',Name)" // filter by movie name + "&$callback=callback" // jsonp request + "&$format=json"; // json request // Make JSONP call to Netflix $.ajax({ dataType: "jsonp", url: query, jsonpCallback: "callback", success: callback }); }); function callback(result) { // unwrap result var movies = result["d"]["results"]; // show movies in template var showMovie = tmpl("movieTemplate"); var html = ""; for (var i = 0; i < movies.length; i++) { // flatten movie movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl; // render with template html += showMovie(movies[i]); } $("#movieTemplateContainer").html(html); } </script> </body> </html> The HTML page in Listing 1 includes two JavaScript libraries: <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script> The first script tag retrieves jQuery from the Microsoft Ajax CDN. You can learn more about the Microsoft Ajax CDN by visiting the following website: http://www.asp.net/ajaxLibrary/cdn.ashx The second script tag is used to reference Resig's micro-templating library. Because I want to use a template to display each movie, I need this library: http://ejohn.org/blog/javascript-micro-templating/ When you enter a value into the Search Movies input field and click the button, the following JavaScript code is executed: // Build OData query var movieName = $("#movieName").val(); var query = "http://odata.netflix.com/Catalog" // netflix base url + "/Titles" // top-level resource + "?$filter=substringof('" + escape(movieName) + "',Name)" // filter by movie name + "&$callback=callback" // jsonp request + "&$format=json"; // json request // Make JSONP call to Netflix $.ajax({ dataType: "jsonp", url: query, jsonpCallback: "callback", success: callback }); This code is used to build a query that will be executed against the Netflix Catalog API. For example, if you enter the search phrase King Kong then the following URL is created: http://odata.netflix.com/Catalog/Titles?$filter=substringof('King%20Kong',Name)&$callback=callback&$format=json This query includes the following parameters: $filter – You assign a filter expression to this parameter to filter the movie results. $callback – You assign the name of a JavaScript callback method to this parameter. OData calls this method to return the movie results.
$format – You assign either the value json or xml to this parameter to specify the format of the movie results. Notice that all of the OData parameters -- $filter, $callback, $format -- start with a dollar sign $. The Movie Lookup form uses JSONP to retrieve data across the Internet. Because WCF Data Services supports JSONP, and Netflix uses WCF Data Services to expose movies using the OData protocol, you can use JSONP when interacting with the Netflix Catalog API. To learn more about using JSONP with OData, see Pablo Castro's blog: http://blogs.msdn.com/pablo/archive/2009/02/25/adding-support-for-jsonp-and-url-controlled-format-to-ado-net-data-services.aspx The actual JSONP call is performed by calling the $.ajax() method. When this call successfully completes, the JavaScript callback() method is called. The callback() method looks like this: function callback(result) { // unwrap result var movies = result["d"]["results"]; // show movies in template var showMovie = tmpl("movieTemplate"); var html = ""; for (var i = 0; i < movies.length; i++) { // flatten movie movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl; // render with template html += showMovie(movies[i]); } $("#movieTemplateContainer").html(html); } The movie results from Netflix are passed to the callback method. The callback method takes advantage of Resig's micro-templating library to display each of the movie results. A template used to display each movie is passed to the tmpl() method. The movie template looks like this: <script id="movieTemplate" type="text/html"> <div> <img src="<%=BoxArtSmallUrl %>" /> <strong><%=Name%></strong> <p> <%=Synopsis %> </p> </div> </script> This template looks like a server-side ASP.NET template. However, the template is rendered in the client (browser) instead of the server. Summary The goal of this blog entry was to demonstrate how well jQuery works with OData. We managed to use a number of interesting open-source libraries and open protocols while building the Movie Lookup form, including jQuery, JSONP, JSON, and OData.
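As an aside, jQuery can also generate the JSONP callback name itself when you place the "=?" placeholder in the URL; a sketch of the same call written that way (not from the original post):

// jQuery replaces "$callback=?" with an auto-generated callback name.
var movieName = $("#movieName").val();
var url = "http://odata.netflix.com/Catalog/Titles"
        + "?$filter=substringof('" + escape(movieName) + "',Name)"
        + "&$format=json&$callback=?";
$.getJSON(url, function (result) {
    callback(result); // reuse the callback() function shown above
});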

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and it's 'scorchio' here in Colorado today. Maybe our stand of Aspen will finally come into leaf ... but I digress. Preamble These templates are not actually that new. I helped in a small way to develop them a few years back with Excel 'meistress' Shirley for a company that was trying to use the Report Manager (RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates; the templates are actually XSL that is created from the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there, i.e. not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS they have now made their way to the standalone release and are willing to share their Excel goodness. You get everything you have with the Excel Analyzer templates plus so much more. Therein lies a question: what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Excel 2007/10 is also in the mix, so watch this space. What more do these templates offer? Well, you can structure data in the Excel output. Similar to RTF templates you can create sheets of data that have master-detail relationships. Although the Analyzer templates can do this, you have to get into macros whereas BIP will do this all for you. You can also use native XSL functions in your data to manipulate it prior to rendering. BIP functions are not currently supported. The most impressive, for me at least, is the sheet 'bursting'. You can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you of course still get all the native Excel functionality. Pre-reqs You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch up a BIP instance running with OBIEE, no problem. You need Excel 2000 or above to build the templates. Some patience - there is no Excel template builder for these new templates, so it's all going to have to be done by hand. It's not that tough but can get a little 'fiddly'. You cannot test the template from Excel; it has to be deployed and then run. Limitations The new templates are definitely superior to the Analyzer templates but there are a few limitations. Re-grouping is not supported. You can only follow a data hierarchy, not bend it to your will, unless you want to get into macros. No support for BIP functions. The templates support native XSL functions only. No template builder. Getting Started The templates make use of named cells and groups of cells to allow BIP to find the insertion point for data points. They also use a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Because I wanted to show the master-detail output we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time; I wrote a post a while back starting from the simple to the more complex. They generate ideal data sets for these templates.
I'm working with the following data set: <EMPLOYEES> <LIST_G_DEPT> <G_DEPT> <DEPARTMENT_ID>10</DEPARTMENT_ID> <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME> <LIST_G_EMP> <G_EMP> <EMPLOYEE_ID>200</EMPLOYEE_ID> <EMP_NAME>Jennifer Whalen</EMP_NAME> <EMAIL>JWHALEN</EMAIL> <PHONE_NUMBER>515.123.4444</PHONE_NUMBER> <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE> <SALARY>4400</SALARY> </G_EMP> </LIST_G_EMP> <TOTAL_EMPS>1</TOTAL_EMPS> <TOTAL_SALARY>4400</TOTAL_SALARY> <AVG_SALARY>4400</AVG_SALARY> <MAX_SALARY>4400</MAX_SALARY> <MIN_SALARY>4400</MIN_SALARY> </G_DEPT> ... </LIST_G_DEPT> </EMPLOYEES> Simple enough to follow and bread-and-butter stuff for an RTF template. Building the Template For an Excel template we need to start by thinking about how we want to render the data. Come up with a sample output in Excel. It's all dummy data, nothing marked up yet, with one row of data for each level. I have the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element. We'll get to Excel functions later. Marking Up Cells Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format: For data grouping, XDO_GROUP_?group_name? For data elements, XDO_?element_name? Notice the question mark delimiter; the group_name and element_name are case sensitive. The next step is to find out how to name cells; the easiest method is to highlight the cell and then type in the name. You can also use the Name Manager dialog. I use 2007 and it's available on the ribbon under the Formulas section. Go through the process of naming all the cells for the element values you have. Using my data set from above, you should end up with something like this in your 'Name Manager' dialog. You can fix any mistakes you might have made through this dialog. Creating Groups In the image above you can see there are a couple of named group cells. To create these it's a simple case of highlighting the cells that make up the group and then naming them. For the EMP group, highlight the employee row and then type in the name, XDO_GROUP_?G_EMP? Notice the 10,000 total is outside of the G_EMP group. It's actually named XDO_?TOTAL_SALARY?, a query-calculated value. For the department group, we need to include the department name cell and the sub EMP grouping and name it XDO_GROUP_?G_DEPT? Notice the 10,000 total is included in the G_DEPT group. This will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA. The name is important as the BIP rendering engine will be looking for it. For our current example we do not need anything other than the required stuff in our XDO_METADATA sheet, but it must be present. Here's what I have: The only cell that is important is the 'Data Constraints:' cell. The rest is optional. To save curious users getting distracted, hide the metadata sheet. Deploying & Running Templates We should now have a usable Excel template. Loading it into a report is easy enough using the browser UI, just like an RTF template. Set the template type to Excel. You will now be able to run the report and hopefully get something like this. You will not get the red highlighting; that's just some conditional formatting I added to the template using Excel functionality.
Your dates are probably going to look raw too. I got around this for now using an Excel function on the cell: =--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"") Google to the rescue on that one. Try some other stuff out. To avoid constantly loading the template through the UI: if you have BIP running locally or can access the reports repository, once you have loaded the template the first time, just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the XML data file, EmpbyDeptExcelData.xml, into the 'demo files' folder and you should be good to go. That's the basics; next we'll start using some XSL functions in the template and move on to the 'bursting' across sheets.

    Read the article

  • CodePlex Daily Summary for Wednesday, February 24, 2010

    CodePlex Daily Summary for Wednesday, February 24, 2010New ProjectsADO.Net DataSets to ExtJs.data.Store: A JavaScript (and C#) based project to reduce the amount of client-side code necessary to consume ADO.Net / ASP.Net web services when using ExtJS.AMP.Net Wrapper: AMP is a platform to build on-line marketplaces (http://www.poweredbyamp.com). AMP.Net provided Object-Like interaction with AMP's restful service...ArkSwitch: ArkSwitch is an easy to use, finger-friendly task manager for Windows Mobile 6.5.3 (with a WM6.5 compatibility mode). It is developed mainly in C#,...Biffen: Cinema-booking project in Computer Science at University College Nordjylland, Denmark.Braintree Client Library: Client library for integrating with the Braintree Gateway.Business Framework: A framework which helps building business applications. It provides business rules, validation rules and a text-based language for writing rules. I...Camp Araminta: This project will be used to coordinate development efforts on the Camp Araminta website.ChoServiceHost: Simple and easy way to create and host Windows Service Applications in .NET 3.5/Visual Studio 2008Delta College Game Development Project: Project site for cs 16 game development classDotNetNuke® Labs: DotNetNuke Labs is a collection of "research & development" type projects for the DotNetNuke platform.Generic web part for hosting Silverlight content on SharePoint sites (WSS,MOSS): This is a generic web part for hosting Silverlight content on WSS 30 and MOSS 2007 sites. The objective of this web part was to make it easy for us...GpTiming: GpTiming is a simple "lab" application related to race events, based on a Domain Model.HTML Forms in Windows Forms: As the names suggests this code library is designed to introduce HTML code (primarily form code) into Windows Forms. It was created because standar...imgur uploader - .net open source uploader for image sharing site imgur: Imgur uploader strives to be an easy to use uploader for images you would like to share with friends and family. It is written in c#.kuuy static system: kuuy static system is a full static publish website system!LaTeX Grapher: The goal of this project is to make a tool that facilitates making high quality two dimensional vector graphic function plots with a minimal amount...LightREST: A .NET library to consume REST-based HTTP services.Machiavelli: Machiavelli is Stackoverflow inspired project that I am working on following Andrew Siemer's article on DotNetSlackers. Mover: Mover makes it easier for developers to create programmatic animations in Silverlight. It provides an expressive API to the platform's underlying S...MVC Presenter: ASP.NET MVC 2で作るプレゼンビューアーnHibernate Attribute mapping: How to use Attibute mapping with a ManyToMany Relationship with nHibernateNIPO Data Processing Component Framework: NIPO is a general purpose component framework for data processing applications (that follow the IPO-principle). Its plugin-based architecture makes...PowerShell Remote File Explorer: This project intends to develop a Windows forms based file explorer to browse/transfer files over PowerShell 2.0 remoting channel. 
The file transfe...Process Flow Tracking of Biomass Distribution Project (University of Mumbai): At Larsen & Toubro Infotech India Ltd., my team worked on a SCM (Supply Chain Management) based project titled 'Process Flow Tracking of Biomass Di...VS2010 Rc1 Fix: Illustrates a fix for working with the ASAP.NET Wizard control with VS2010 RC1Yicker: a microblog program devolep by c#.New ReleasesADO.Net DataSets to ExtJs.data.Store: Ext.net: This is the first version of Ext.net. This version contains a single class, Ext.net.Store which extends the Ext.data.Store class to consume ADO.Ne...AMP.Net Wrapper: AMP.Net v1.0: Provides abstraction for all the product search functionality offered by AMP.ArkSwitch: ArkSwitch legacy versions: Old versions - no need to download themArkSwitch: ArkSwitch v1.1.0: ArkSwitch v1.1.0Braintree Client Library: Braintree 1.0.0: Braintree .NET client library 1.0.0Business Framework: BusinessFramework preview: Early preview bits. See Rules for a sample.Business Framework: Samples: SamplesCC.Votd: CC.Votd 1.0.10.224: This is the initial release of CC.Votd. Marking as beta since I'm the only one who has used it up to this point.ChoServiceHost: ChoServiceHost.msi: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Installer)ChoServiceHost: ChoServiceHost-Src.zip: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Source Files)CHS Extranet: Beta 2.4: Beta 2.4 Release: Change Log: Added HTML preview options for XLS, XLSX, DOCX File Changes: ~/MyComputer.aspx ~/mycomputer.css ~/basestyle.css...Composure: AvalonDock-55751-VS2010.NET4: This is a "convenience build" of AvalonDock (drop 55751) for VIsual Studio 2010 and .NET 4.0. Nothing has been altered in the source code (which ...Data Access Component: Version 2.6: Add LINQ support.Desktop Google Reader: 1.3 Beta 1: New features: Read it Later included (see http://readitlaterlist.com/) Liking added (working: see number of liking users, see if liking yourself,...Explorer Plus: Explorer Plus v0.3: Amazon Locales AddedFree Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.3 Released: Hi, Today we have released the final version of Visifire v3.0.3 which contains the following major features: * DataBinding. * IndicatorEn...Generic web part for hosting Silverlight content on SharePoint sites (WSS,MOSS): CTP: The objective of this release was to gather feedback from the wider community. I intend to pursue further development and make fixes wherever appro...HTML Forms in Windows Forms: HTMLForms 1.0: First Release.imgur uploader - .net open source uploader for image sharing site imgur: Release 2010-02-23-01: This is the first codeplex release! Let mayhem commence...Jeremi Stadler: Stick Tops 2.5: Sticktops is a very light program that makes it easy to paste stuff on small notes on the screen. All notes you have is saved on a server so you ca...kuuy static system: kss_v1.0beta sql: kss_v1.0beta sql scripts sourceMDownloader: MDownloader-0.15.2.55998: Fixed detecting uploading.com dead links; Added hiding rss entries without files;Mover: MoverLib for Silverlight 3: A first version of MoverLib for Silverlight 3.nHibernate Attribute mapping: 1.0: Source CodenHibernate Attribute mapping: Download 1: Zip fileNodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.113: The NodeXL class libraries can be used to display network graphs in .NET applications. 
To include a NodeXL network graph in a WPF desktop or Windo...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.113: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's NewThis version inclu...OAuthLib: OAuthLib (1.6.0.0): Difference between previous version is as next. 7079 Make it possible to pass factory method of request in ObtainUnauthorizedRequestToken and Reque...patterns & practices SharePoint Guidance: SPG2010 Drop 5: SharePoint Guidance Drop Notes Microsoft patterns and practices ****************************************** ***************************************...PowerShell Remote File Explorer: PSRemoteExplorer 0.1: This release is the initial release of PowerShell remote file explorer. This enables the basic functionality of a remote file explorer. This also p...Reusable Library: v1.0.3: A collection of reusable abstractions for enterprise application developer.SharePoint Outlook Connector: Version 1.0.2.4: Version 1.0.2.4 Minor bugs have been fixed.Silverlight Server File Manager: First production release: This release is in production. Release on change set 37268.SIMD Detector: 2nd Release: Released C/CLI assembly project for use in CSharp and VB. Tested in CSharp console application. A Windows Form application coming soon. Projects ma...Source Analysis Policy: Source Analysis Policy v1.1 SP1: This release contains the compiled, and signed binaries in an installation package. This package also registers the policy with Microsoft Visual St...SpecExpress : A Fluent Validation Framework: SpecExpress 1.1: UpdatesAdded Validation Contexts feature Fixed bug with handling for Bool Types and Required MessageStore now allows for overriding individual ...VCC: Latest build, v2.1.30223.0: Automatic drop of latest buildVS2010 Rc1 Fix: RC1Fix01: This is a very simple project implementing a Microsoft Walkthrough at http://msdn.microsoft.com/en-us/library/wdb4eb30%28VS.100%29.aspx and the man...WPF AutoComplete TextBox Control: version 1.0: Initial releaseMost Popular ProjectsASP.NET Ajax LibraryManaged Extensibility FrameworkAccelerators for Microsoft Dynamics CRMWindows 7 USB/DVD Download ToolDotNetZip LibraryMDownloaderVirtual Router - Wifi Hot Spot for Windows 7 / 2008 R2MFCMAPIDroid ExplorerUseful Sharepoint Designer Custom Workflow ActivitiesMost Active ProjectsDinnerNow.netRawrBlogEngine.NETInfoServiceNB_Store - Free DotNetNuke Ecommerce Catalog ModuleRapid Entity Framework. (ORM). CTP 2SharpMap - Geospatial Application Framework for the CLRjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryXcoordination Application Space

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate here's a simple example. If you have a Web API method like this:[HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit it with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead like this:[HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP Request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back asking why I'd want to do this in the first place - after all HTML Form submissions are the domain of MVC and not WebAPI, which is a valid point. However, the more common use case is using POST variables with AJAX calls. The following is quite common for passing simple values:$.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work. How ASP.NET Web API handles Content Bodies Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API Action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from the async nature of Web API where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values FormDataCollection - collection of POST keys/values ModelBinding POST Values - Binding POST data to Object Properties The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter.
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated, including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: You have to provide a strongly typed object that can receive the data and this object has to map the inbound data. To rewrite the example above to use ModelBinding I have to create a class that maps the properties that I need as parameters:public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method:[HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine, mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/json Content-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model binding and in fact it's a common practice to use (view) model objects for inputs coming back from the client and to map them into these models. But it can be kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data, but sometimes there are just a couple of simple response values that need to be sent back. If all you need is to pass a couple of operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding so the following would still work:[HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work. Collecting POST values with FormDataCollection Another more dynamic approach to handle POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general) from which you read the values out individually by querying each key. [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such a setup as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use, and especially with string parameters it is easy to deal with.
It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values. What about [FromBody]? Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how FromBody works and several related issues with how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding Not really sure where the Web API team thought [FromBody] would really be a good fit other than a down and dirty way to send a full string buffer. Extending Web API to make multiple POST Vars work? Don't think so Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and Web API community I didn't hear anything that even suggests that this is possible. From what I could find I'd say it's not possible, primarily because Web API's Routing engine does not account for the POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work there's not much that can be done. If somebody has an idea how this could be accomplished I would love to hear about it. Do we really need multi-value POST mapping? I think that POST value mapping is a feature that one would expect of any API tool to have. If you look at common APIs out there like Flickr and Google Maps etc. they all work with POST data. POST data is very prominent - much more so than JSON inputs - and so supporting as many options as possible would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allow you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior and Web API's non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer to this question.
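One practical client-side workaround, if you control the calling script, is to post JSON instead of url-encoded form data so that the LoginData-style model binding still applies. A sketch (not from the original post; the URL is the sample endpoint used above):

$.ajax({
    url: "samples/authenticate",
    type: "POST",
    contentType: "application/json",  // triggers Web API's JSON formatter
    data: JSON.stringify({ Username: "Rick", Password: "sekrit" }),
    success: function (result) { /* handle the response */ }
});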
I hope for V.next of WebAPI Microsoft will consider this a feature that's worth adding. Related Articles Passing multiple POST parameters to Web API Controller Methods Mike Stall's post: How Web API does Parameter Binding Where does ASP.NET Web API Fit? © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations. - Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set. To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS. When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”. - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop. To free memory SQL Server does the following: it frees unused memory, and it notifies Memory Manager clients to release memory - caches free unreferenced cache objects, and the buffer pool frees pages based on oldest access times. The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop. - Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS. Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks. Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity. How does SQL Server respond to changing OS memory? In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V Dynamic Memory. SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly. In “Denali”, Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e.
detecting and acting on Hyper-V Dynamic Memory allocations). How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling? When a SQL workload causes the sqlserver.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up. How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning? If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above). This raises another question. Can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability but it is something we're looking into. What Memory Management changes are there in SQL Server “Denali”? In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that now the max server memory setting includes any size page allocations and managed CLR procedure allocations, so it now represents a closer approximation to total sqlserver.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings. Another important change is no more AWE or hot-add support for 32-bit editions. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration). In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
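As a concrete illustration of the internal-pressure behavior described above, lowering max server memory at runtime forces SQL Server to release memory back to the OS. A sketch using the standard sp_configure procedure (the 4096 MB value is only an example):

-- Lowering 'max server memory' creates internal memory pressure and causes
-- SQL Server to release memory back to the operating system; raising it
-- again lets the buffer pool grow when the workload returns.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;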

    Read the article

  • Fitting it together, database, reporting, applications in C#

    - by alvonellos
Introduction Preamble I was hesitant to post this, since it's an application whose intricate details are defined elsewhere, and answers may not be helpful to others. Within the past few weeks (I was actually going to write a blog post about this after I finished) I've discovered that the barrier I'm encountering is one that's actually quite common for newer developers. This question is not so much about a specific thing as it is about piecing those things together. I've searched the internet far and wide, and found many tutorials on how to create applications that are kind of similar to what I'm looking for. I've also looked at hiring another, more experienced, developer to help me along, but all I've gotten are unqualified candidates that don't have the experience necessary and won't take care of the client or project like I will. I'd rather have the project never transpire than release a solution that is half-baked. I've asked professors at my school, but they've not turned up answers to my question. I'm an experienced developer, and I've written many applications that are -- very abstractly -- close to what I'm doing, but my experiences from those applications aren't giving me enough leverage to solve this particular problem. I just hope that posting this article isn't a mistake for me to write. Project Description I have a project I'm working on for a client that is a rewrite of an application, originally written in Foxpro 2.6 by someone before me, that performs some analysis (which, sadly, I'm not allowed to disclose as per my employment contract) on financial data. One day, after a long talk between the client and me -- where he intimately described his frustrations with all the bugs I've been hacking out of this code for 6 months now -- he told me to just rewrite it and gave me a month to write a good 1/8 of this 65k LOC Foxpro monstrosity. It'll take me a good 3 - 6 months to rewrite this software (I know things the original programmer did not, like inheritance) going as I am right now, but I'm quickly discovering that I'm going to need to use databases. Prior to this contract I didn't even know about Foxpro, and so I've had to learn Foxpro on the fly, write procedures and make modifications to the database. I've actually come to like it, and this project would be rewritten in Foxpro if it were still a supported language, because over the past few months, I've come to like the features of Foxpro that make it so easy to develop data-driven applications. I once performed an experiment, comparing C# to Foxpro. What took me 45 minutes in C# took me two in Foxpro, and I knew C# prior to Foxpro. I was hoping to leverage the power of C#, but it intimidates me that in Foxpro, you can have one line of code and be using a database. Prior to this, I have never done any serious database development from scratch. All the applications that I've written are in a different league. They are either completely data-naive or data-naive enough that I can get away with not using a database through serialization or by designing algorithms that work with the data in a manner that is stateless, so there is no need to worry about databases. I've come to realize, very quickly, that serialization and my efficacy with data structures have been my crutch all these years, have prevented me from adventuring into databases, and have consequently hindered my success in real-world programming.
Sure, I've written some database stuff in Perl and Python, and I've done forms and worked with relational databases and tables. I'm a wizard in Access and Excel (seriously) and can do just about anything, but it just feels unnatural writing SQL code in another language. I don't mind writing SQL itself; it's that bridge between the database and the program code that drives me absolutely bonkers. I hope I'm not the only one to think this, but it bothers me that I have to create statements like the following:

string sSql = "SELECT * from tablename";

There's really no reason for that kind of unchecked binding between two languages and two APIs. Don't get me wrong, SQL is great, but I don't like the idea that, when executing commands on a SQL database, one must intermix database and application code, and there's no database independence, which means that different versions of different databases can break code. This isn't very nice.

The nicest thing about FoxPro is the cohesiveness between the programming language and the database. It's so easy, and FoxPro makes it easy, because the tool just fits the task. I can see why so many developers have built a career on this language, because it lowered the barrier to entry for the data-driven applications that so many businesses need. It was wonderful. For my purposes today, though, with the demands for community support, extensibility, and language features, FoxPro isn't a solution that I feel would be the right tool for the job.

I'm also worried about working too heavily with the database, because I've seen data-driven .NET applications have issues with database caches, running out of memory, and objects in the database not being collected (memory leaks). And OH, the queries. Which one, how, and why? There is a plethora of different ways a database can be set up; I think I counted 5 or 6 different kinds of database projects alone that I can choose from. That is a great mountain for me to climb when I don't even know where to begin when it comes to writing data-driven applications. The problem isn't that I don't know SQL or that I don't know C#. I know both and have worked with both extensively. It's making them work together that's the problem, and it's something I've never done in C# before.

Reports

The client likes paper. The data needs to be printed out in a format that is extensible, layered, and easy to use. I have never done reporting before, so this is a bit of a problem. The reports would come from Crystal Reports bound to the data source, so there's a dependency on the database, from what I understand.

Code reuse

A large part of the design decisions I've made so far is to break this software into routines and modular DLLs so that much of the code can be reused. For example, when I set up this database, I want to be able to reuse the same database code over and over again. I also want to make sure that when the day comes that another developer is here, he or she will be able to pick up right where I left off. The quicker I develop these applications, the better off I am.

Tasks & Goals

In my project, I need to write routines that apply algorithms and look for predefined patterns in financial data. Additionally, I need to simulate trading based on predefined algorithms and data. Then I need to prepare reports on that data.
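On that last point, LINQ is one way C# narrows the gap I complained about above: the query is checked by the compiler instead of living in a string. Below is a hedged sketch of a pattern scan over in-memory quotes. The Quote type and the "three rising closes" pattern are invented stand-ins (the real analysis is under NDA), but the same query shape can run against a database through LINQ to SQL or Entity Framework.

// A hedged sketch: the Quote type and the "three rising closes"
// pattern are invented stand-ins, not the original algorithm.
using System;
using System.Collections.Generic;
using System.Linq;

class Quote
{
    public DateTime Day { get; set; }
    public decimal Close { get; set; }
}

class PatternScan
{
    // Returns the start date of every run of three rising closes.
    static IEnumerable<DateTime> RisingTriples(IList<Quote> quotes)
    {
        return Enumerable.Range(0, Math.Max(0, quotes.Count - 2))
            .Where(i => quotes[i].Close < quotes[i + 1].Close
                     && quotes[i + 1].Close < quotes[i + 2].Close)
            .Select(i => quotes[i].Day);
    }

    static void Main()
    {
        var quotes = new List<Quote>
        {
            new Quote { Day = new DateTime(2012, 1, 2), Close = 10m },
            new Quote { Day = new DateTime(2012, 1, 3), Close = 11m },
            new Quote { Day = new DateTime(2012, 1, 4), Close = 12m },
            new Quote { Day = new DateTime(2012, 1, 5), Close = 9m },
        };

        foreach (var day in RisingTriples(quotes))
            Console.WriteLine("Pattern starts {0:d}", day);
    }
}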
Additionally, I need to have a way to change the code base for this application quickly and effectively, without hacking together some band-aid solution for a problem that really needs a trauma ward.

Special Considerations

The solution must be fast, run quickly on existing hardware, and not be too much of a pain to maintain and write. I understand that anything I write, I'm married to -- I'm responsible for the things that I write, because my reputation and livelihood depend on it.

Do I really need a database? What about performance?

Performance was such a big issue that I hand-wrote a data structure capable of performing 2 billion operations, using a total of 4 GB of memory, in under 1/4 of a second on a standard Core 2 Duo processor. I could not find a similar, pre-written data structure in C# to perform this task.

What setup do I use in terms of database?

What about reporting?

I'd prefer to have PDFs generated, but I'd like to be able to visually sketch those reports and then just have a ReportFactory of some sort: I pass some variables in, and it produces the report (a rough sketch of that idea follows at the end of this post).

About Me

I'm a lone developer for a small business in this area. This is the first time I've done something like this, and I've never had the breadth and depth of my knowledge tested this way. I'm incredibly frustrated with this project because I feel overwhelmed by the task at hand. I'm looking for that entry-level point where I can draw a line and say, "this is what I need to do."

Conclusion

I may not have been clear enough in my post. I'm still new to this whole thing, and I've been doing my best to contribute back to the community that I've leeched so much knowledge from. I'd be glad to edit my post and add more information if possible. I'm looking for a big-picture solution or design process that helps me get off the ground in this world of data-driven applications, because I have a feeling that it's going to be central to my career as a programmer for some time. Specifically, if you didn't get it from the rest of the post (I may not have been clear enough), I really need some guidance as to where to go with the design decisions for this project. One thing that would be useful is a pro/con list for the different kinds of database projects available in VS2010; I've tried, but generating that list has been as hard as solving the problem itself. If you had to walk a developer through writing a data-driven application in C# for the first time, how would you do it? Where would you point them?
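For what it's worth, here is the shape of the ReportFactory I have in mind -- a hedged sketch only. The interface names are invented, and the PDF rendering is deliberately left to whatever engine ends up behind it (Crystal Reports, SQL Server Reporting Services, or a library such as iTextSharp).

// A hedged sketch of the ReportFactory idea. All names here are
// invented for illustration; nothing is tied to a specific engine.
using System.Collections.Generic;

interface IReport
{
    // Renders the finished report to a PDF on disk.
    void RenderPdf(string outputPath);
}

interface IReportFactory
{
    // Builds a report from a named template plus a bag of parameters,
    // so calling code never touches the reporting engine directly.
    IReport Create(string templateName,
                   IDictionary<string, object> parameters);
}

// Calling code then stays one line per report, FoxPro-style:
//   factory.Create("TradeSummary", args).RenderPdf(@"C:\reports\summary.pdf");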

    Read the article
