Search Results

Search found 5133 results on 206 pages for 'max krug'.


  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation, but I cannot manage to make it work. Here's a sample:

        #!/bin/bash
        TRUE=1
        FALSE=0
        ############### Added for testing
        log4bash_LOG_ENABLED=$TRUE
        log4bash_rootLogger=$TRACE,f,s
        log4bash_appender_f=file
        log4bash_appender_f_dir=$(pwd)
        log4bash_appender_f_file=test.log
        log4bash_appender_f_roll_format=%Y%m
        log4bash_appender_f_roll=$TRUE
        log4bash_appender_f_maxBackupIndex=10
        ####################################
        log4bash_abs(){
            if [ "${1:0:1}" == "." ]; then
                builtin echo ${rootDir}/${1}
            else
                builtin echo ${1}
            fi
        }
        log4bash_check_app_dir(){
            if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then
                dir=$(log4bash_abs $1)
                if [ ! -d ${dir} ]; then
                    # log a separation line
                    mkdir $dir
                fi
            fi
        }
        # Delete old log files
        # $1 Log directory
        # $2 Log filename
        # $3 Log filename suffix
        # $4 Max backup index
        log4bash_delete_old_files(){
            ##### Added for testing
            builtin echo "Running log4bash_delete_old_files $@" >&2
            #####
            if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then
                local directory=$(log4bash_abs $1)
                local filename=$2
                local maxBackupIndex=$4
                local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g')
                local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt)
                local fileCnt=$(builtin echo -e "${logFileList}" | wc -l)
                local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex))
                local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g'))
                ##### Added for testing
                builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" >&2
                #####
                if [ ${fileToDeleteCnt} -gt 0 ]; then
                    for f in "${fileToDelete[@]}"; do
                        #### Added for testing
                        builtin echo "Removing file ${f}" >&2
                        ####
                        builtin eval rm -f ${f}
                    done
                fi
            fi
        }
        # Appender
        # $1 Log directory
        # $2 Log file
        # $3 Log file roll ?
        # $4 Appender Name
        log4bash_filename(){
            builtin echo "Running log4bash_filename $@" >&2
            local format
            local filename
            log4bash_check_app_dir "${1}"
            if [ ${3} -eq 1 ]; then
                local formatProp=${4}_roll_format
                format=${!formatProp}
                if [ -z ${format} ]; then
                    format=$log4bash_appender_file_format
                fi
                local suffix=.`date "+${format}"`
                filename=${1}/${2}${suffix}
                # Old log files deletion
                local previousFilenameVar=int_${4}_file_previous
                local maxBackupIndexVar=${4}_maxBackupIndex
                if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then
                    builtin eval export $previousFilenameVar=$filename
                    log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}"
                else
                    builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}"
                fi
            else
                filename=${1}/${2}
            fi
            builtin echo $filename
        }
        ######################## Added for testing
        filename_caller(){
            builtin echo "filename_caller Call $1"
            output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" ))
            builtin echo ${output}
        }
        #### Previous logs generation
        for i in {1101..1120}; do
            file="${log4bash_appender_f_file}.2012${i:2:3}"
            builtin echo "${file} $i"
            touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file
        done
        for i in {1..4}; do
            filename_caller $i
        done

    I expect the log4bash_filename function to step into the following if only when the calculated log filename differs from the previous one:

        if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then

    For this scenario to apply, ${!previousFilenameVar} would have to be set correctly, but it is not, so log4bash_filename steps into this if every time, which is really not necessary. It looks like the issue is that the following line does not work properly:

        builtin eval export $previousFilenameVar=$filename

    I have some theories to explain why: in the original code, functions are declared and exported as readonly, which makes them unable to modify global variables (I removed the readonly declarations in the above sample, but the problem persists), and function calls are performed inside $(), which should make them run in separate shell instances, so modified variables are not exported back to the main shell. But I cannot manage to find a workaround for this issue. Any help is appreciated, thanks in advance!
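
    A minimal sketch (not from the original post) of the subshell behaviour suspected above: any assignment made inside $( ) is lost when the command substitution returns, which is exactly where log4bash_filename runs. One hedged workaround is to keep the "previous filename" state outside the shell, for example in a file (the path below is hypothetical):

        #!/bin/bash
        counter=0
        bump(){ counter=$((counter+1)); }

        bump                      # runs in the current shell
        echo "$counter"           # prints 1

        output=$(bump)            # runs in a subshell: the increment is lost
        echo "$counter"           # still prints 1

        # Hedged workaround: persist the state in a file so it survives $()
        state=/tmp/log4bash_prev_filename
        filename=test.log.201201
        prev=$(cat "$state" 2>/dev/null)
        if [ "$prev" != "$filename" ]; then
            echo "$filename" > "$state"
            # ... delete old log files here ...
        fi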


  • stxxl Assertion `it != root_node_.end()' failed

    - by Fabrizio Silvestri
    I am receiving this "assertion failed" error when trying to insert an element into a stxxl map. The entire assertion error is the following:

        resCache: /usr/include/stxxl/bits/containers/btree/btree.h:470:
        std::pair<iterator, bool> stxxl::btree::btree::insert(const value_type&)
        [with KeyType = e_my_key, DataType = unsigned int, CompareType = comp_type,
        unsigned int RawNodeSize = 16384u, unsigned int RawLeafSize = 131072u,
        PDAllocStrategy = stxxl::SR, stxxl::btree::btree::value_type = std::pair]:
        Assertion `it != root_node_.end()' failed.
        Aborted

    Any idea?

    Edit: Here's the code fragment:

        void request_handler::handle_request(my_key& query, reply& rep)
        {
            c_++;
            strip(query.content);
            std::cout << "Received query " << query.content
                      << " by thread " << boost::this_thread::get_id()
                      << ". It is number " << c_ << "\n";
            strcpy(element.first.content, query.content);
            element.second = c_;
            testcache_.insert(element);
            STXXL_MSG("Records in map: " << testcache_.size());
        }

    Edit 2: Here are more details (constants such as MAX_QUERY_LEN are omitted):

        struct comp_type : std::binary_function<my_key, my_key, bool>
        {
            bool operator () (const my_key & a, const my_key & b) const
            {
                return strncmp(a.content, b.content, MAX_QUERY_LEN) < 0;
            }
            static my_key max_value() { return max_key; }
            static my_key min_value() { return min_key; }
        };

        typedef stxxl::map<my_key, my_data, comp_type> cacheType;
        cacheType testcache_;

        request_handler::request_handler()
            : testcache_(NODE_CACHE_SIZE, LEAF_CACHE_SIZE)
        {
            c_ = 0;
            memset(max_key.content, (std::numeric_limits<unsigned char>::max)(), MAX_QUERY_LEN);
            memset(min_key.content, (std::numeric_limits<unsigned char>::min)(), MAX_QUERY_LEN);
            testcache_.enable_prefetching();
            STXXL_MSG("Records in map: " << testcache_.size());
        }
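
    For comparison, here is a hedged minimal stxxl::map setup along the lines of the library's documented examples (key type and cache sizing are illustrative, not taken from the post). If a trivial version like this inserts cleanly, attention shifts to the comparator/max_value contract above: one commonly reported trigger for this assertion is inserting a key that does not compare strictly less than comp_type::max_value().

        #include <stxxl/map>
        #include <limits>

        struct cmp
        {
            bool operator () (const int & a, const int & b) const { return a < b; }
            static int max_value() { return (std::numeric_limits<int>::max)(); }
            static int min_value() { return (std::numeric_limits<int>::min)(); }
        };

        typedef stxxl::map<int, unsigned, cmp> map_type;

        int main()
        {
            // cache sizing follows the pattern used in the stxxl examples
            map_type m(map_type::node_block_type::raw_size * 3,
                       map_type::leaf_block_type::raw_size * 3);
            m.insert(std::make_pair(1, 10u));
            return m.size() == 1 ? 0 : 1;
        }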


  • Flex Air HTMLLoader blank pop up window when flash content is loaded

    - by user128938
    I have a Flex AIR program that loads external content with the HTMLLoader. Now, for some reason, whenever I load a page that has any Flash content, a blank system window pops up outside of my program. It's completely blank: all white, with min, max and close buttons, and no title for the window. If I close it, any Flash content I loaded stops working. For the life of me I can't figure out what's happening, and there are no messages in the console. Does anyone have any ideas? I appreciate any help you can give. Here's the code I'm using:

        private var webPage:HTMLLoader;

        private function registerEvents():void
        {
            this.addEventListener(gameLoadEvent.GAME_LOAD, gameLoad);
            //webPage = new HTMLLoader();
        }

        // function called back from Game Command to load the correct game
        private function gameLoad(event:Event):void
        {
            var gameEvent:gameLoadEvent = event as gameLoadEvent;
            loadgame(gameEvent.url, gameEvent.variables);
        }

        private function loadgame(url:String, variableString:String):void
        {
            DesktopModelLocator.getInstance().scaleX = 1;
            DesktopModelLocator.getInstance().scaleY = 1;
            //var url:String = "http://pro-us.sbt-corp.com/aspx/member/LaunchGame.aspx";
            var request:URLRequest = new URLRequest(url);
            //var variables:URLVariables = new URLVariables("gameNum=17&as=as1&t=demo&package=a&btnQuit=0");
            if (variableString != null && variableString != "") {
                var variables:URLVariables = new URLVariables(variableString);
                variables.exampleSessionId = new Date().getTime();
                variables.exampleUserLabel = "guest";
                request.data = variables;
            }
            webPage = HTMLLoader.createRootWindow(true, null, true, null);
            webPage.height = systemManager.stage.nativeWindow.height - 66;
            webPage.width = systemManager.stage.nativeWindow.width;
            webPage.load(request);
            webPage.navigateInSystemBrowser = false;
            flexBrowser.addChild(webPage);
        }
        ]]>
        </mx:Script>

        <mx:HTML id="flexBrowser" width="1366" height="658" backgroundAlpha="0.45"
                 creationComplete="registerEvents();" x="0" y="0">
        </mx:HTML>
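
    One hedged thing to try (not from the original post): HTMLLoader.createRootWindow(true, ...) creates a new, visible, top-level native window, which matches the blank, titleless popup described. Since mx:HTML already wraps its own HTMLLoader, loading through that loader avoids creating a second window at all:

        private function loadgame(url:String, variableString:String):void
        {
            var request:URLRequest = new URLRequest(url);
            if (variableString != null && variableString != "") {
                request.data = new URLVariables(variableString);
            }
            // reuse the loader owned by the mx:HTML control instead of
            // creating a root window
            flexBrowser.htmlLoader.navigateInSystemBrowser = false;
            flexBrowser.htmlLoader.load(request);
        }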


  • R: Plotting a graph with different colors of points based on advanced criteria

    - by balconydoor
    What I would like to do is a plot (using ggplot) where the x axis represents years, and the last three years in the plot have a different colour from the rest. The last three years should also meet a certain criterion, and based on this the last three years can be either red or green. The criterion is that the mean of the last three years should be less (making them green) or more (making them red) than the 66th percentile of the remaining years. So far I have made two functions: one calculating the last-three-year mean:

        LYM3 <- function (x) {
          LYM3 <- tail(x,3)
          mean(LYM3$Data,na.rm=T)
        }

    And one for the 66th percentile of the remaining years:

        perc66 <- function(x) {
          percentile <- head(x,-3)
          quantile(percentile$Data, .66, names=F,na.rm=T)
        }

    Here are two sets of data that can be used in the calculations (plots): the first is an example from my real data, where LYM3(df1) < perc66(df1), and the second is just made-up data where LYM3 > perc66.

        df1 <- data.frame(Year=c(1979:2010),
                          Data=c(347261.87, 145071.29, 110181.93, 183016.71, 210995.67,
                                 205207.33, 103291.78, 247182.10, 152894.45, 170771.50,
                                 206534.55, 287770.86, 223832.43, 297542.86, 267343.54,
                                 475485.47, 224575.08, 147607.81, 171732.38, 126818.10,
                                 165801.08, 136921.58, 136947.63, 83428.05, 144295.87,
                                 68566.23, 59943.05, 49909.08, 52149.11, 117627.75,
                                 132127.79, 130463.80))
        df2 <- data.frame(Year=c(1979:2010), Data=c(sample(50,29,replace=T),75,75,75))

    Here's my code for the plot so far:

        plot <- ggplot(df1, aes(x=Year, y=Data)) +
          theme_bw() +
          geom_point(size=3, aes(colour=ifelse(df1$Year<2008, "black",
                                 ifelse(LYM3(df1) < perc66(df1),"green","red")))) +
          geom_line() +
          scale_x_continuous(breaks=c(1980,1985,1990,1995,2000,2005,2010),
                             limits=c(1978,2011))
        plot

    As you notice, it doesn't really do what I want it to do. The only thing it does is turn the years before 2008 into one level and those after into another, and base the point colour on these two levels. Since I don't want the cutoff year to be stationary either, I made another tiny function:

        fun3 <- function(x) {
          df <- subset(x, Year==(max(Year)-2))
          df$Year
        }

    So the previous code would have the same effect as:

        geom_point(size=3, aes(colour=ifelse(df1$Year<fun3(df1), "black","red")))

    But it still does not care about my colours. Why does it turn the years into levels? And how come an ifelse does not work inside another one in this case? How would it be possible to get the arguments to do what I like? I realise this might be a bit messy, asking for a lot at the same time, but I hope my description is pretty clear. It would be helpful if someone could at least point me in the right direction. I tried to put the code for the plot into a function as well, so I wouldn't have to change the data frame in all functions within the plot, but I can't get it to work. Thank you!
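
    A hedged rework (not from the original post): ifelse() inside aes() builds a character vector, which ggplot treats as a discrete variable and maps through its default colour scale, which is where the "levels" come from. Precomputing the colour per row and declaring the values literal with scale_colour_identity() sidesteps both problems; LYM3 and perc66 are the functions defined above:

        library(ggplot2)
        cutoff <- max(df1$Year) - 2
        df1$col <- ifelse(df1$Year < cutoff, "black",
                          ifelse(LYM3(df1) < perc66(df1), "green", "red"))
        ggplot(df1, aes(x = Year, y = Data)) +
          theme_bw() +
          geom_line() +
          geom_point(size = 3, aes(colour = col)) +
          scale_colour_identity() +
          scale_x_continuous(breaks = seq(1980, 2010, 5), limits = c(1978, 2011))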


  • How to manually (bitwise) perform (float)x? (homework)

    - by Silver
    Now, here is the header of the function I'm supposed to implement:

        /*
         * float_from_int - Return bit-level equivalent of expression (float) x
         *   Result is returned as unsigned int, but
         *   it is to be interpreted as the bit-level representation of a
         *   single-precision floating point value.
         *   Legal ops: Any integer/unsigned operations incl. ||, &&. also if, while
         *   Max ops: 30
         *   Rating: 4
         */
        unsigned float_from_int(int x) {
          ...
        }

    We aren't allowed to do float operations, or any kind of casting. I tried to implement the first algorithm given at this site: http://locklessinc.com/articles/i2f/ Here's my code:

        unsigned float_from_int(int x) {
          // grab sign bit
          int xIsNegative = 0;
          int absValOfX = x;
          if (x < 0) {
            xIsNegative = 1;
            absValOfX = -x;
          }

          // zero case
          if (x == 0) {
            return 0;
          }

          //int shiftsNeeded = 0;
          /*while(){
            shiftsNeeded++;
          }*/

          unsigned I2F_MAX_BITS = 15;
          unsigned I2F_MAX_INPUT = ((1 << I2F_MAX_BITS) - 1);
          unsigned I2F_SHIFT = (24 - I2F_MAX_BITS);
          unsigned result, i, exponent, fraction;

          if ((absValOfX & I2F_MAX_INPUT) == 0)
            result = 0;
          else {
            exponent = 126 + I2F_MAX_BITS;
            fraction = (absValOfX & I2F_MAX_INPUT) << I2F_SHIFT;
            i = 0;
            while (i < I2F_MAX_BITS) {
              if (fraction & 0x800000)
                break;
              else {
                fraction = fraction << 1;
                exponent = exponent - 1;
              }
              i++;
            }
            result = (xIsNegative << 31) | exponent << 23 | (fraction & 0x7fffff);
          }
          return result;
        }

    But it didn't work (see the test error below):

        Test float_from_int(-2147483648[0x80000000]) failed...
        ...Gives 0[0x0]. Should be -822083584[0xcf000000]

    I don't know where to go from here. How should I go about producing the float from this int?
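
    A note on why 0x80000000 in particular fails: the locklessinc routine above only handles inputs of up to I2F_MAX_BITS = 15 bits, so absValOfX & I2F_MAX_INPUT masks INT_MIN down to 0 and the function takes the result = 0 branch. Below is a hedged reference sketch in ordinary C that handles the full int range, including round-to-nearest-even; it ignores the assignment's restricted op set and 30-op budget, so treat it as a correctness oracle to diff against rather than a submission:

        #include <stdio.h>

        unsigned float_from_int(int x) {
            if (x == 0)
                return 0;
            unsigned sign = (x < 0) ? 0x80000000u : 0u;
            unsigned mag = (x < 0) ? -(unsigned)x : (unsigned)x; /* also right for INT_MIN */
            int msb = 31;
            while (!(mag & (1u << msb)))
                msb--;                                 /* index of the leading 1 bit */
            unsigned exponent = 127u + (unsigned)msb;
            unsigned frac;
            if (msb <= 23) {
                frac = (mag << (23 - msb)) & 0x7fffffu; /* no precision lost */
            } else {
                int shift = msb - 23;
                unsigned rest = mag & ((1u << shift) - 1);
                unsigned half = 1u << (shift - 1);
                frac = (mag >> shift) & 0x7fffffu;
                /* round to nearest, ties to even */
                if (rest > half || (rest == half && (frac & 1u))) {
                    frac++;
                    if (frac >> 23) { frac = 0; exponent++; } /* mantissa overflowed */
                }
            }
            return sign | (exponent << 23) | frac;
        }

        int main(void) {
            printf("%x\n", float_from_int(-2147483647 - 1)); /* expect cf000000 */
            printf("%x\n", float_from_int(3));               /* expect 40400000 */
            return 0;
        }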


  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format: each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system:

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics:

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema: I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question: I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info. The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is:

    runs table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |
        |-------------+-------------|

    spectra table

        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |
        |----------------+-------------|

    datapoints table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |
        |-------------+-------------|

    Is this reasonable?
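
    One hedged schema variant worth benchmarking (not from the original question): keying datapoints by (spectrum_id, index) instead of a surrogate id gives InnoDB a clustered layout that keeps each spectrum's points physically adjacent, and saves the 8 bytes per row that a BIGINT id would cost across 200 billion rows:

        -- sketch only: types and names mirror the proposed datapoints table
        CREATE TABLE datapoints (
            spectrum_id INT UNSIGNED NOT NULL,
            `index`     INT UNSIGNED NOT NULL,   -- reserved word, hence backticks
            mz          DOUBLE NOT NULL,
            num_counts  DOUBLE NOT NULL,
            PRIMARY KEY (spectrum_id, `index`),  -- clustered: one spectrum = one contiguous range
            FOREIGN KEY (spectrum_id) REFERENCES spectra (id)
        ) ENGINE=InnoDB;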


  • I get a `Cannot read property 'slice' of undefined` message when I use the scrollTo jQuery plugin inside this function

    - by alexchenco
    I'm using the jQuery scrollTo plugin. I get this error in my JS console:

        Uncaught TypeError: Cannot read property 'slice' of undefined
            d.fn.scrollTo                                   index.html.js:16827
            jQuery.extend.each                              index.html.js:662
            d.fn.scrollTo                                   index.html.js:16827
            jQuery.extend.each                              index.html.js:662
            jQuery.fn.jQuery.each                           index.html.js:276
            d.fn.scrollTo                                   index.html.js:16827
            popupPlace                                      index.html.js:18034
            (anonymous function)                            index.html.js:17745
            jQuery.extend._Deferred.deferred.resolveWith    index.html.js:1018
            done                                            index.html.js:7247
            jQuery.ajaxTransport.send.script.onload.script.onreadystatechange

    It happens when I place $(".menu").scrollTo( $("li.matched").attr("id"), 800 ); inside this function:

        function popupPlace(dict) {
            $popup = $('div#dish-popup');
            $popup.render(dict, window.dishPopupTemplate);
            if (typeof(dict.dish) === 'undefined') {
                $popup.addClass('place-only');
            } else {
                $popup.removeClass('place-only');
            }
            var $place = $('div#dish-popup div.place');
            var place_id = dict.place._id;
            if (liked[place_id]) {
                $place.addClass('liked');
            } else {
                $place.removeClass('liked');
            }
            if (dict.place.likes) {
                $place.addClass('has-likes');
            } else {
                $place.addClass('zero-likes');
            }
            var tokens = window.currentSearchTermTokens;
            var tokenRegex = tokens && new RegExp($.map(tokens, RegExp.escape).join('|'), 'gi');
            $.each(dict.place.products, function(n, product) {
                $product = $('#menu-item-' + product.id);
                if (liked[place_id + '/' + product.id]) {
                    $product.addClass('liked');
                }
                if (tokens && matchesDish(product, tokens)) {
                    $product.addClass('matched');
                    $product.highlight(tokenRegex);
                } else {
                    $product.removeClass('matched');
                    $product.removeHighlight();
                }
                if (product.likes) {
                    $product.addClass('has-likes');
                } else {
                    $product.addClass('zero-likes');
                }
            });
            $('#overlay').show();
            $('#dish-popup-container').show();
            // Scroll to matched dish
            //$("a#scrolll").attr("href", "#" + $("li.matched").attr("id"));
            //$("a#scrolll").trigger("click");
            $(".menu").scrollTo( $("li.matched").attr("id"), 800 );
            // Hide dish results on mobile devices to prevent having a blank
            // space at the bottom of the site
            if (Modernizr.mq('only screen and (max-width: 640px)')) {
                $('ol.results').hide();
            }
            $(".close-dish-popup").click(function() {
                $("#overlay").hide();
                $("#dish-popup-container").hide();
                $('ol.results').show();
                changeState({}, ['dish', 'place', 'serp']);
            });
            showPopupMap(dict.place, "dish-popup-map");
        }

    Any suggestion to fix this?
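
    A hedged guard for the failing line (not from the original post): when nothing carries class matched, $("li.matched").attr("id") is undefined, and scrollTo ends up calling slice on it. Also, scrollTo expects a selector or element, not a bare id string:

        var $match = $("li.matched").first();
        if ($match.length) {
            // pass the element itself (or "#" + $match.attr("id")),
            // never a bare id string
            $(".menu").scrollTo($match, 800);
        }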


  • C# video equivalent to Image.FromStream? Or changing the scope of the following script to allow video

    - by Daniel
    The following is part of an upload class in a C# script. I'm a PHP programmer; I've never messed with C# much, but I'm trying to learn. This upload script will not handle anything except images, and I need to adapt this class to handle other types of media also, or rewrite it altogether. If I'm correct, using (Image image = Image.FromStream(file.InputStream)) basically says that the scope of the following block is Image: only an image can be used, or the object is discarded? And also that the variable image is being created from an Image from the file stream, which I understand to be like the $_FILES array in PHP? I dunno. I don't really care about making thumbnails right now either way, so if this can be taken out and still process the upload I'm totally cool with that; I just haven't had any luck getting this thing to take anything but images, even when commenting out that whole part of the class...

        protected void Page_Load(object sender, EventArgs e)
        {
            string dir = Path.Combine(Request.PhysicalApplicationPath, "files");
            if (Request.Files.Count == 0)
            {
                // No files were posted
                Response.StatusCode = 500;
            }
            else
            {
                try
                {
                    // Only one file at a time is posted
                    HttpPostedFile file = Request.Files[0];
                    // Size limit 100MB
                    if (file.ContentLength > 102400000)
                    {
                        // File too large
                        Response.StatusCode = 500;
                    }
                    else
                    {
                        string id = Request.QueryString["userId"];
                        string[] folders = userDir(id);
                        foreach (string folder in folders)
                        {
                            dir = Path.Combine(dir, folder);
                            if (!Directory.Exists(dir))
                                Directory.CreateDirectory(dir);
                        }
                        string path = Path.Combine(dir, String.Concat(Request.QueryString["batchId"], "_", file.FileName));
                        file.SaveAs(path);
                        // Create thumbnail
                        int dot = path.LastIndexOf('.');
                        string thumbpath = String.Concat(path.Substring(0, dot), "_thumb", path.Substring(dot));
                        using (Image image = Image.FromStream(file.InputStream))
                        {
                            // Find the ratio that will create maximum height or width of 100px.
                            double ratio = Math.Max(image.Width / 100.0, image.Height / 100.0);
                            using (Image thumb = new Bitmap(image, new Size((int)Math.Round(image.Width / ratio), (int)Math.Round(image.Height / ratio))))
                            {
                                using (Graphics graphic = Graphics.FromImage(thumb))
                                {
                                    // Make sure thumbnail is not crappy
                                    graphic.SmoothingMode = SmoothingMode.HighQuality;
                                    graphic.InterpolationMode = InterpolationMode.High;
                                    graphic.CompositingQuality = CompositingQuality.HighQuality;
                                    // JPEG
                                    ImageCodecInfo codec = ImageCodecInfo.GetImageEncoders()[1];
                                    // 90% quality
                                    EncoderParameters encode = new EncoderParameters(1);
                                    encode.Param[0] = new EncoderParameter(Encoder.Quality, 90L);
                                    // Resize
                                    graphic.DrawImage(image, new Rectangle(0, 0, thumb.Width, thumb.Height));
                                    // Save
                                    thumb.Save(thumbpath, codec, encode);
                                }
                            }
                        }
                        // Success
                        Response.StatusCode = 200;
                    }
                }
                catch
                {
                    // Something went wrong
                    Response.StatusCode = 500;
                }
            }
        }
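
    A hedged way to open the handler up to non-image media (names follow the code above): save the upload unconditionally, then gate only the thumbnail step on the posted content type, so videos, archives, etc. skip Image.FromStream instead of throwing into the catch block:

        file.SaveAs(path);

        // Only image/* uploads get the Image.FromStream thumbnail treatment;
        // everything else is already saved, so just report success.
        if (file.ContentType != null &&
            file.ContentType.StartsWith("image/", StringComparison.OrdinalIgnoreCase))
        {
            int dot = path.LastIndexOf('.');
            string thumbpath = String.Concat(path.Substring(0, dot), "_thumb", path.Substring(dot));
            using (Image image = Image.FromStream(file.InputStream))
            {
                // ... existing thumbnail code, unchanged ...
            }
        }

        Response.StatusCode = 200;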


  • Help with code optimization

    - by Ockonal
    Hello, I've written a little particle system for my 2D application. Here is the raining code:

        // HPP -----------------------------------
        struct Data
        {
            float x, y, x_speed, y_speed;
            int timeout;
            Data();
        };

        std::vector<Data> mData;
        bool mFirstTime;
        void processDrops(float windPower, int i);

        // CPP -----------------------------------
        Data::Data()
            : x(rand()%ScreenResolutionX), y(0)
            , x_speed(0), y_speed(0), timeout(rand()%130)
        {
        }

        void Rain::processDrops(float windPower, int i)
        {
            int posX = rand() % mWindowWidth;
            mData[i].x = posX;
            mData[i].x_speed = WindPower*0.1; // WindPower is float
            mData[i].y_speed = Gravity*0.1;   // Gravity is 9.8 * 19.2

            // If that is the first time, position drops randomly within the window height
            if (mFirstTime)
            {
                mData[i].timeout = 0;
                mData[i].y = rand() % mWindowHeight;
            }
            else
            {
                mData[i].timeout = rand() % 130;
                mData[i].y = 0;
            }
        }

        void update(float windPower, float elapsed)
        {
            // If this is the first time - create the array of new Data objects
            if (mFirstTime)
            {
                for (int i=0; i < mMaxObjects; ++i)
                {
                    mData.push_back(Data());
                    processDrops(windPower, i);
                }
                mFirstTime = false;
            }

            for (int i=0; i < mMaxObjects; i++)
            {
                // Sleep until timeout > 0 (to make drops fall with a random delay)
                if (mData[i].timeout > 0)
                {
                    mData[i].timeout--;
                }
                else
                {
                    // Find new x/y positions
                    mData[i].x += mData[i].x_speed * elapsed;
                    mData[i].y += mData[i].y_speed * elapsed;
                    // Find new speeds
                    mData[i].x_speed += windPower * elapsed;
                    mData[i].y_speed += Gravity * elapsed;
                    // Drawing here ...
                    // If the drop has fallen out of the screen
                    if (mData[i].y > mWindowHeight)
                        processDrops(windPower, i);
                }
            }
        }

    So the main idea is: I have a structure which consists of a drop's position and speed, and a function for (re)initializing the drop at some index in the vector array. On the first run I create the array at its maximum size and process it in a cycle. But this code works slower than everything else I have. Please help me optimize it. I tried to replace all int with uint16_t, but I don't think it matters.
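
    A few hedged micro-optimizations on the update loop (names follow the code above): reserve the vector once, hoist the per-frame products out of the loop, and take a reference to each Data instead of repeated mData[i] indexing. If it is still slow after this, the bottleneck is most likely the drawing rather than the arithmetic:

        void update(float windPower, float elapsed)
        {
            if (mFirstTime)
            {
                mData.reserve(mMaxObjects);              // one allocation instead of many
                for (int i = 0; i < mMaxObjects; ++i)
                {
                    mData.push_back(Data());
                    processDrops(windPower, i);
                }
                mFirstTime = false;
            }

            const float windStep = windPower * elapsed;  // hoisted: same for every drop
            const float gravStep = Gravity * elapsed;

            for (int i = 0; i < mMaxObjects; ++i)
            {
                Data& d = mData[i];                      // one lookup per drop
                if (d.timeout > 0) { --d.timeout; continue; }
                d.x += d.x_speed * elapsed;
                d.y += d.y_speed * elapsed;
                d.x_speed += windStep;
                d.y_speed += gravStep;
                // Drawing here ...
                if (d.y > mWindowHeight)
                    processDrops(windPower, i);
            }
        }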


  • Why does a PHP page resubmit after refresh?

    - by user2719452
    Why does the page resubmit after refreshing a PHP page? Try it: go to http://qass.im/message-envelope/ and upload any image, then press F5; after the page refreshes, it resubmits. Why? I don't want it to resubmit after refreshing the page. What is the solution? This is my form code:

        <form id="uploadedfile" name="uploadedfile" enctype="multipart/form-data"
              action="upload.php" method="POST">
            <input name="uploadedfile" type="file" />
            <input type="submit" value="upload" />
        </form>

    And this is the PHP code in the upload.php file:

        <?php
        $allowedExts = array("gif", "jpeg", "jpg", "png", "zip", "pdf", "docx", "rar", "txt", "doc");
        $temp = explode(".", $_FILES["uploadedfile"]["name"]);
        $extension = end($temp);
        $newname = $extension.'_'.substr(str_shuffle(str_repeat("0123456789abcdefghijklmnopqrstuvwxyz", 7)), 4, 7);
        $imglink = 'attachment/attachment_file_';
        $uploaded = $imglink .$newname.'.'.$extension;

        if ((($_FILES["uploadedfile"]["type"] == "image/jpeg")
            || ($_FILES["uploadedfile"]["type"] == "image/jpg")
            || ($_FILES["uploadedfile"]["type"] == "image/pjpeg")
            || ($_FILES["uploadedfile"]["type"] == "image/x-png")
            || ($_FILES["uploadedfile"]["type"] == "image/gif")
            || ($_FILES["uploadedfile"]["type"] == "image/png")
            || ($_FILES["uploadedfile"]["type"] == "application/msword")
            || ($_FILES["uploadedfile"]["type"] == "text/plain")
            || ($_FILES["uploadedfile"]["type"] == "application/vnd.openxmlformats-officedocument.wordprocessingml.document")
            || ($_FILES["uploadedfile"]["type"] == "application/pdf")
            || ($_FILES["uploadedfile"]["type"] == "application/x-rar-compressed")
            || ($_FILES["uploadedfile"]["type"] == "application/x-zip-compressed")
            || ($_FILES["uploadedfile"]["type"] == "application/zip")
            || ($_FILES["uploadedfile"]["type"] == "multipart/x-zip")
            || ($_FILES["uploadedfile"]["type"] == "application/x-compressed")
            || ($_FILES["uploadedfile"]["type"] == "application/octet-stream"))
            && ($_FILES["uploadedfile"]["size"] < 5242880) // Max size is 5MB
            && in_array($extension, $allowedExts))
        {
            move_uploaded_file($_FILES["uploadedfile"]["tmp_name"], $uploaded );
            echo '<a target="_blank" href="'.$uploaded.'">click</a>'; // If the file has been uploaded
            echo '<h3>'.$uploaded.'</h3>';
        }
        if($_FILES["uploadedfile"]["error"] > 0){
            echo '<h3>Please choose a file to upload!</h3>'; // If you don't choose a file
        }
        elseif(!in_array($extension, $allowedExts)){
            echo '<h3>This extension is not allowed!</h3>'; // If you choose a file that is not allowed
        }
        elseif($_FILES["uploadedfile"]["size"] > 5242880){
            echo "Big size!"; // If you choose a big file
        }
        ?>

    If you have a solution, please edit my PHP code and paste your solution code! Thanks.
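
    The standard fix is the Post/Redirect/Get pattern: after the POST succeeds, redirect so that the page the browser ends up on is a GET, and F5 then replays the harmless GET instead of the upload. A hedged sketch grafted onto the code above (the status query parameter is an invention for illustration):

        <?php
        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            // ... run the existing upload/validation code, set $uploaded ...
            header('Location: upload.php?status=ok&file=' . urlencode($uploaded));
            exit; // nothing after a redirect header should render
        }
        if (isset($_GET['status'])) {
            // the browser is now on a GET page; refreshing re-runs only this
            echo '<h3>Uploaded: ' . htmlspecialchars($_GET['file']) . '</h3>';
        }
        ?>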


  • pagination - 10 pages per page

    - by arthur
    I have a pagination script that displays a list of all pages, like so:

        prev [1][2][3][4][5][6][7][8][9][10][11][12][13][14] next

    But I would like to show only ten of the numbers at a time:

        prev [3][4][5][6][7][8][9][10][11][12] next

    How can I accomplish this? Here is my code so far:

        <?php
        /* Set current, prev and next page */
        $page = (!isset($_GET['page']))? 1 : $_GET['page'];
        $prev = ($page - 1);
        $next = ($page + 1);

        /* Max results per page */
        $max_results = 2;

        /* Calculate the offset */
        $from = (($page * $max_results) - $max_results);

        /* Query the db for total results.
           You need to edit the sql to fit your needs. */
        $result = mysql_query("select title from topics");
        $total_results = mysql_num_rows($result);
        $total_pages = ceil($total_results / $max_results);
        $pagination = '';

        /* Create a PREV link if there is one */
        if($page > 1)
        {
            $pagination .= '<a href="?page='.$prev.'">Previous</a> ';
        }

        /* Loop through the total pages */
        for($i = 1; $i <= $total_pages; $i++)
        {
            if(($page) == $i)
            {
                $pagination .= $i;
            }
            else
            {
                $pagination .= '<a href="index.php?page='.$i.'">'.$i.'</a>';
            }
        }

        /* Print NEXT link if there is one */
        if($page < $total_pages)
        {
            $pagination .= '<a href="?page='.$next.'"> Next</a>';
        }

        /* Now we have our pagination links in a variable ($pagination), ready to
           print to the page. I put it in a variable because you may want to show
           them at the top and bottom of the page. */

        /* Below is how you query the db for ONLY the results for the current page */
        $result = mysql_query("select * from topics LIMIT $from, $max_results ");
        while ($i = mysql_fetch_array($result))
        {
            echo $i['title'].'<br />';
        }
        echo $pagination;
        ?>
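
    A hedged replacement for the page-number loop above: clamp a ten-link window around the current page, so the window slides with $page but never runs past either end (variable names follow the script):

        <?php
        $window = 10;
        $start = max(1, min($page - (int)floor($window / 2),
                            $total_pages - $window + 1));
        $end   = min($total_pages, $start + $window - 1);
        for ($i = $start; $i <= $end; $i++) {
            $pagination .= ($i == $page)
                ? $i
                : ' <a href="index.php?page=' . $i . '">' . $i . '</a>';
        }
        // e.g. $page = 7 of 14 pages yields links [2]..[11];
        //      $page = 14 yields [5]..[14]
        ?>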


  • remove/ignore float from outer div

    - by acidzombie24
    This may sound weird, but I have some CSS which aligns my divs. In one place I also use http://www.brunildo.org/test/img_center.html, which centers images. Now I want my divs inside a larger div to go to another line if that one gets full; float: left seems to be the answer. The problem is it ruins my formatting, including the solution in the above link. I have this test code. If I remove the width and float, it looks fine, except it may take up too much space and not go to another line. I was thinking I could use float on an outer div and center the image within. However, float: left is still breaking it. I am hoping there is a way to remove the float so each div does go left, but the div inside centers correctly without breaking my formatting.

        <style type="text/css">
        .wraptocenter {
            display: table-cell;
            text-align: center;
            vertical-align: middle;
            width: 200px;
            height: 200px;
            background: blue;
        }
        .wraptocenter * {
            vertical-align: middle;
        }
        /*\*//*/
        .wraptocenter {
            display: block;
        }
        .wraptocenter span {
            display: inline-block;
            height: 100%;
            width: 1px;
        }
        /**/
        div.c {
            background: red;
            overflow: hidden;
            min-width: 400px;
            max-width: 400px;
        }
        div.c div {
            float: left;
        }
        </style>
        <!--[if lt IE 8]><style>
        .wraptocenter span {
            display: inline-block;
            height: 100%;
        }
        </style><![endif]-->

        <div class="c">
        <div>
        <div>
            <div class="wraptocenter"><span></span><img src="a.jpg" alt="/a.jpg"></div>
            <div class="wraptocenter"><span></span><img src="a.jpg" alt="/a.jpg"></div>
            <div class="wraptocenter"><span></span><img src="a.jpg" alt="/a.jpg"></div>
        </div></div></div>
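
    A hedged alternative to try (not from the original post): swap the blanket float for inline-block on the centering boxes. Inline-blocks wrap to the next line when the 400px row is full, and the span-based vertical centering inside .wraptocenter keeps working without display: table-cell:

        div.c div { float: none; }        /* drop the blanket float */
        div.c .wraptocenter {
            display: inline-block;        /* wraps when the 400px row is full */
            vertical-align: top;
        }
        /* caveat: whitespace between inline-blocks renders as small gaps;
           remove the whitespace in the markup (or set font-size: 0 on the
           parent) if exactly two 200px boxes must pack per row */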


  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page, something like this:

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                       GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance.

    NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) to using the NOLOCK directive in this case.
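
    One hedged rewrite to benchmark against the nested-select version (SQL Server 2005 and later): ROW_NUMBER() finds the latest row per variable in a single pass over the user's slice of the table, instead of the MAX(id) subquery plus IN lookup:

        SELECT id, variable, value
        FROM (
            SELECT t.id, t.variable, t.value,
                   ROW_NUMBER() OVER (PARTITION BY t.variable
                                      ORDER BY t.id DESC) AS rn
            FROM results t
            WHERE t.userid = 2111846
              AND t.variable IN ('internat', 'veteran', 'athlete')
        ) latest
        WHERE rn = 1;

    With an index on (userid, variable, id) this typically resolves as one seek per variable; whether it actually beats the original is an empirical question for this data.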


  • jQuery slider onChange auto submit form

    - by bikey77
    I'm using a jQuery slider as a price-range selector in a form. I'd like the form to submit automatically when one of the values has been changed. I used a couple of examples I found on SO, but they didn't work with my code.

        <form action="itemlist.php" method="post" enctype="application/x-www-form-urlencoded"
              name="priceform" id="priceform" target="_self">
            <div id="slider-holder">
                Prices: From <span id="pricefromlabel">100 &#8364;</span>
                To <span id="pricetolabel">500 &#8364;</span>
                <input type="hidden" id="pricefrom" name="pricefrom" value="100" />
                <input type="hidden" id="priceto" name="priceto" value="500" />
                <div id="slider-range"></div>
                <input name="Search" type="submit" value="Search" />
            </div>
        </form>

    This is the code that displays the values of the slider and updates the two hidden form fields I use to store the prices in order to submit:

        <script>
        $(function() {
            $("#slider-range").slider({
                range: true,
                min: 0,
                max: 1000,
                values: [ <?=$minprice?>, <?=$maxprice?> ],
                start: function (event, ui) {
                    event.stopPropagation();
                },
                slide: function( event, ui ) {
                    $( "#pricefrom" ).val(ui.values[0]);
                    $( "#priceto" ).val(ui.values[1]);
                    $( "#pricefromlabel" ).html(ui.values[0] + ' &euro;');
                    $( "#pricetolabel" ).html(ui.values[1] + ' &euro;');
                }
            });
            return false;
        });
        </script>

    I've tried adding this code, as well as a data-autosubmit="true" attribute, to the div, but no result:

        $(function() {
            $('[data-autosubmit="true"]').change(function() {
                parentForm = $(this).closest('#priceform');
                clearTimeout(submitTimeout);
                submitTimeout = setTimeout(function() { parentForm.submit() }, 100);
            });
        });

    I've also tried adding a $.post() event to the slider, but I'm not very good with jQuery, so I'm probably doing it wrong. Any help will be appreciated.
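
    One hedged approach (not from the original post): jQuery UI's slider fires a stop event when the user releases the handle, which is the natural moment to submit, so the form isn't posted on every pixel of the drag:

        $("#slider-range").slider({
            range: true,
            min: 0,
            max: 1000,
            slide: function(event, ui) {
                $("#pricefrom").val(ui.values[0]);
                $("#priceto").val(ui.values[1]);
            },
            stop: function(event, ui) {
                // fires once the handle is released
                $("#priceform").submit();
            }
        });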


  • Is throwing an exception a healthy way to exit?

    - by ramaseshan
    I have a setup that looks like this:

        class Checker
        { // member data
            Results m_results; // see below

        public:
            bool Check();

        private:
            bool Check1();
            bool Check2();
            // .. so on
        };

    Checker is a class that performs lengthy check computations for engineering analysis. Each type of check has a resultant double that the checker stores (see below).

        bool Checker::Check()
        { // initialisations etc.
            Check1();
            Check2();
            // ... so on
        }

    A typical check function would look like this:

        bool Checker::Check1()
        {
            double result;
            // lots of code
            m_results.SetCheck1Result(result);
        }

    And the results class looks something like this:

        class Results
        {
            double m_check1Result;
            double m_check2Result;
            // ...

        public:
            void SetCheck1Result(double d);

            double GetOverallResult()
            {
                return max(m_check1Result, m_check2Result, ...);
            }
        };

    Note: all code is oversimplified. The Checker and Results classes were initially written to perform all checks and return an overall double result. There is now a new requirement where I only need to know if any of the results exceeds 1. If it does, subsequent checks need not be carried out (it's an optimisation). To achieve this, I could either modify every CheckN function to check its result and return early, with the parent Check function repeatedly inspecting m_results; or, in Results::SetCheckNResult(), throw an exception if the value exceeds 1 and catch it at the end of Checker::Check().

    The first is tedious, error-prone and sub-optimal, because every CheckN function further branches out into sub-checks etc. The second is non-intrusive and quick. One disadvantage I can think of is that the Checker code may not necessarily be exception-safe (although there is no other exception being thrown anywhere else). Is there anything else obvious that I'm overlooking? What about the cost of throwing exceptions and stack unwinding? Is there a better third option?
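
    A hedged sketch of a third option: keep the result sink as-is and let Check() itself short-circuit between top-level checks by polling the overall result. This avoids both the per-sub-check plumbing and the exception; names follow the classes above, and the 1.0 threshold is the limit described:

        bool Checker::Check()
        {
            // initialisations etc.
            typedef bool (Checker::*CheckFn)();
            static const CheckFn checks[] = { &Checker::Check1, &Checker::Check2 /*, ...*/ };
            for (std::size_t i = 0; i < sizeof(checks) / sizeof(checks[0]); ++i)
            {
                (this->*checks[i])();
                if (m_results.GetOverallResult() > 1.0)
                    return true; // limit exceeded: skip the remaining checks
            }
            return true;
        }

    This only prunes between top-level checks; if most of the potential saving is inside each CheckN's sub-checks, the same poll has to be repeated there, which is where the tedium the question mentions comes back.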


  • Socket programming: InputStream stuck in loop - read() always returns 0

    - by Atom Skaa ska Hic
    Server-side code:

        public static boolean sendFile() {
            int start = Integer.parseInt(startAndEnd[0]) - 1;
            int end = Integer.parseInt(startAndEnd[1]) - 1;
            int size = (end - start) + 1;
            try {
                bos = new BufferedOutputStream(initSocket.getOutputStream());
                bos.write(byteArr, start, size);
                bos.flush();
                bos.close();
                initSocket.close();
                System.out.println("Send file to : " + initSocket);
            } catch (IOException e) {
                System.out.println(e.getLocalizedMessage());
                disconnected();
                return false;
            }
            return true;
        }

    Client side:

        public boolean receiveFile() {
            int current = 0;
            try {
                int bytesRead = bis.read(byteArr, 0, byteArr.length);
                System.out.println("Receive file from : " + client);
                current = bytesRead;
                do {
                    bytesRead = bis.read(byteArr, current, (byteArr.length - current));
                    if (bytesRead >= 0)
                        current += bytesRead;
                } while (bytesRead != -1);
                bis.close();
                bos.write(byteArr, 0, current);
                bos.flush();
                bos.close();
            } catch (IOException e) {
                System.out.println(e.getLocalizedMessage());
                disconnected();
                return false;
            }
            return true;
        }

    The client side is multithreaded; the server side is not. I just pasted the code that causes the problem; if you want to see all of it, please tell me. After debugging the code, I found that whatever I set the max thread count to, the first thread always gets stuck in this loop: bis.read(...) always returns 0, and even though the server has closed the stream, it does not get out of the loop. I don't know why, but the other threads work correctly.

        do {
            bytesRead = bis.read(byteArr, current, (byteArr.length - current));
            if (bytesRead >= 0)
                current += bytesRead;
        } while (bytesRead != -1);
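
    A hedged reading of the symptom: InputStream.read(b, off, len) returns 0 whenever len is 0, and here len is byteArr.length - current, which reaches 0 as soon as the buffer fills. At that point the loop spins on read() == 0 and never sees -1, even after the server closes the connection. A sketch of the loop with both exit conditions:

        int current = 0;
        while (current < byteArr.length) {
            int bytesRead = bis.read(byteArr, current, byteArr.length - current);
            if (bytesRead == -1) {
                break;             // server closed the stream
            }
            current += bytesRead;  // otherwise keep filling; stop when full
        }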


  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a Django website I host with lighttpd + fastcgi. It works great, but it seems that the first request always takes up to 3 seconds, while subsequent requests are much faster (< 1 s). I activated access logs in lighttpd in order to track the issue, but I'm kind of stuck. Here are logs where I 'lose' 4 s (from 10:04:17 to 10:04:21):

        2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi
        2012-12-01 10:04:17: (response.c.470) -- before doc_root
        2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.473) Path :
        2012-12-01 10:04:17: (response.c.521) -- after doc_root
        2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.524) Path : /var/www/finderauto.fcgi
        2012-12-01 10:04:17: (response.c.541) -- logical -> physical
        2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.544) Path : /var/www/finderauto.fcgi
        2012-12-01 10:04:21: (response.c.128) Response-Header:
        HTTP/1.1 200 OK
        Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT
        Expires: Sat, 01 Dec 2012 09:14:21 GMT
        Content-Type: text/html; charset=utf-8
        Cache-Control: max-age=600
        Transfer-Encoding: chunked
        Date: Sat, 01 Dec 2012 09:04:21 GMT
        Server: lighttpd/1.4.28

    I guess that if there is a problem, it's with my configuration. So here is the way I launch my Django app:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=3033

    And here is my lighttpd conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
            "mod_accesslog",
        )

        server.document-root = "/var/www"
        server.upload-dirs   = ( "/var/cache/lighttpd/uploads" )
        server.errorlog      = "/var/log/lighttpd/error.log"
        server.pid-file      = "/var/run/lighttpd.pid"
        server.username      = "www-data"
        server.groupname     = "www-data"
        accesslog.filename   = "/var/log/lighttpd/access.log"

        debug.log-request-header = "enable"
        debug.log-response-header = "enable"
        debug.log-file-not-found = "enable"
        debug.log-request-handling = "enable"
        debug.log-timeouts = "enable"
        debug.log-ssl-noise = "enable"
        debug.log-condition-cache-handling = "enable"
        debug.log-condition-handling = "enable"

        fastcgi.server = (
            "/finderauto.fcgi" => (
                "main" => (
                    # Use host / port instead of socket for TCP fastcgi
                    "host" => "127.0.0.1",
                    "port" => 3033,
                    #"socket" => "/home/finderadmin/finderauto.sock",
                    "check-local" => "disable",
                    "fix-root-scriptname" => "enable",
                )
            ),
        )

        alias.url = (
            "/media" => "/home/user/django/contrib/admin/media/",
        )

        url.rewrite-once = (
            "^(/media.*)$" => "$1",
            "^/favicon\.ico$" => "/media/favicon.ico",
            "^(/.*)$" => "/finderauto.fcgi$1",
        )

        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm",
                             "index.lighttpd.html" )
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"

        dir-listing.encoding = "utf-8"
        server.dir-listing   = "enable"
        compress.cache-dir   = "/var/cache/lighttpd/compress/"
        compress.filetype    = ( "application/x-javascript", "text/css", "text/html", "text/plain" )

        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    If any of you could help me find out where I lose these 3 or 4 seconds, I would much appreciate it. Thanks in advance!


  • SSIS - user variable used in derived column transform is not available - in some cases

    - by soo
    Unfortunately I don't have a repro for my issue, but I thought I would try to describe it in case it sounds familiar to someone. I am using SSIS 2005, SP2.

    My package has a package-scope user variable; let's call it user_var. The first step in the control flow is an Execute SQL task which runs a stored procedure. All that SP does is insert a record in a SQL table (with an identity column) and then go back and get the max ID value. The Execute SQL task saves this output into user_var. The control flow then has a Data Flow task: it goes and gets some source data, has a derived column which sets a column called run_id to user_var, and saves the data to a SQL destination.

    In most cases (this template is used for many packages, running every day) this all works great: all of the destination records created get set with a correct run_id. However, in some cases, there is a set of the destination data that does not get run_id equal to user_var, but instead gets a value of 0 (0 is the default value for user_var). I have two instances where this has happened, but I can't make it happen. In both cases, it was just less than 10,000 records that had run_id = 0. Since SSIS writes data out in 10,000-record blocks, this really makes me think that, for the first block of data written out, user_var was not yet set. Then, after that first block, run_id is set to a correct value for the rest of the data.

    But control passed on to my data flow from the Execute SQL task; it would have seemed reasonable to me that it wouldn't go on until the SP had completed and user_var was set. Maybe it just runs the SP but doesn't wait for it to complete?

    In both cases where this has happened, there seemed to be a few packages hitting the table to get a new user_var at about the same time. And in both cases lots of data was written (40 million rows, 60 million rows); my thinking is that that means the writes were happening for a while.

    Sorry to be both long-winded AND vague. A winning combination! Does this sound familiar to anyone? Thanks.


  • Efficient method of finding database rows that have *one or more* qualities from a list of seven qualities

    - by hithere
    Hello! For this question, I'm looking to see if anyone has a better idea of how to implement what I'm currently planning (below).

    I'm keeping track of a set of images using a database; each image is represented by one row. I want to be able to search for images using a number of different search parameters, one of which is a search-by-color option. (The rest of the search stuff is currently working fine.) Images in this database can contain up to seven colors:

        -Red
        -Orange
        -Yellow
        -Green
        -Blue
        -Indigo
        -Violet

    Here are some example user queries:

        "I want an image that contains red."
        "I want an image that contains red and blue."
        "I want an image that contains yellow and violet."
        "I want an image that contains red, orange, yellow, green, blue, indigo and violet."

    And so on. Users make this selection through checkboxes in an HTML form. They can check zero checkboxes, all seven, and anything in between. I'm curious to hear what people think would be the most efficient way to perform this database search. I have two possible options right now, but I feel like there must be something better that I'm not thinking of.

    (Option 1) For each row, simply have seven additional fields in the database, one for each color. Each field holds a 1 or 0 (true/false) value, and I SELECT based on whatever the user has checked off. (I didn't like this solution so much, because it seemed kind of wasteful to add seven additional fields, especially since most pictures in this table will only have 3-4 colors max, though some could have up to 7; that means I'm storing a lot of zeros. Also, if I added more searchable colors later on, which I don't think I will but it's always possible, I'd have to add more fields.)

    (Option 2) For each image row, I could have a "colors" text field that stores space-separated color names (or numbers for the sake of compactness). Then I could do a fulltext match against the field, selecting rows that contain "red yellow green" (or "1 3 4"). But I kind of didn't want to do fulltext searching, because I already allow a keyword search and didn't really want to do two fulltext searches per image search. Plus, if the database gets big, fulltext stuff might slow down.

    Any better options that I didn't think of? Thanks!

    Side note: I'm using PHP to work with a MySQL database.
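
    A hedged third option, essentially option 1 with the seven flags packed into one integer (table and column names below are hypothetical): each colour gets a bit, and bitwise AND answers both "all of" and "any of" queries. The trade-off is that such predicates can't be served by a normal B-tree index, so, like option 1, it relies on the rest of the WHERE clause for selectivity:

        -- red=1, orange=2, yellow=4, green=8, blue=16, indigo=32, violet=64
        ALTER TABLE images ADD colors TINYINT UNSIGNED NOT NULL DEFAULT 0;

        -- "contains red AND blue": both bits must be set
        SELECT id FROM images WHERE (colors & (1 | 16)) = (1 | 16);

        -- "contains red OR blue": any of the bits set
        SELECT id FROM images WHERE (colors & (1 | 16)) <> 0;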


  • Java Sorting "queue" list based on DateTime and Z Position (part of school project)

    - by Kuchinawa
    For a school project I have a list of 50k containers that arrive on a boat. These containers need to be sorted in a list in such a way that the earliest departure DateTimes are at the top, with the containers stacked above those above them. This list then gets used by a crane that picks them up in order.

    I started out with two Collections.sort() methods. The first gets them in the right XYZ order:

        Collections.sort(containers, new Comparator<ContainerData>() {
            @Override
            public int compare(ContainerData contData1, ContainerData contData2) {
                return positionSort(contData1.getLocation(), contData2.getLocation());
            }
        });

    Then another one to reorder the dates while keeping the position in mind:

        Collections.sort(containers, new Comparator<ContainerData>() {
            @Override
            public int compare(ContainerData contData1, ContainerData contData2) {
                int c = contData1.getLeaveDateTimeFrom().compareTo(contData2.getLeaveDateTimeFrom());
                int p = positionSort2(contData1.getLocation(), contData2.getLocation());
                if (p != 0)
                    c = p;
                return c;
            }
        });

    But I never got this method to work. What I have working now is rather quick and dirty and takes a long time to process (50 seconds for all 50k). First, a sort on DateTime:

        Collections.sort(containers, new Comparator<ContainerData>() {
            @Override
            public int compare(ContainerData contData1, ContainerData contData2) {
                return contData1.getLeaveDateTimeFrom().compareTo(contData2.getLeaveDateTimeFrom());
            }
        });

    Then a correction function that bumps top containers up:

        containers = stackCorrection(containers);

        private static List<ContainerData> stackCorrection(List<ContainerData> sortedContainerList) {
            for (int i = 0; i < sortedContainerList.size(); i++) {
                ContainerData current = sortedContainerList.get(i);
                // 5 = Max Stack (0 index)
                if (current.getLocation().getZ() < 5) {
                    // Loop through possible containers above current
                    for (int j = 5; j > current.getLocation().getZ(); --j) {
                        // Search for container above
                        for (int k = i + 1; k < sortedContainerList.size(); ++k)
                            if (sortedContainerList.get(k).getLocation().getX() == current.getLocation().getX()) {
                                if (sortedContainerList.get(k).getLocation().getY() == current.getLocation().getY()) {
                                    if (sortedContainerList.get(k).getLocation().getZ() == j) {
                                        // Found -> move container above current
                                        sortedContainerList.add(i, sortedContainerList.remove(k));
                                        k = sortedContainerList.size();
                                        i++;
                                    }
                                }
                            }
                    }
                }
            }
            return sortedContainerList;
        }

    I would like to implement this in a better/faster way, so any hints are appreciated. :)
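
    A hedged restructuring sketch, as a different strategy from the bump-up correction (method names follow the code above; it assumes getLeaveDateTimeFrom() returns a java.util.Date): group the containers into their (x, y) stacks once, instead of rescanning the whole list per container, then emit each stack top-down, with stacks ordered by their most urgent departure. That turns the quadratic correction into an O(n log n) pass:

        Map<String, List<ContainerData>> stacks = new HashMap<String, List<ContainerData>>();
        for (ContainerData c : containers) {
            String key = c.getLocation().getX() + ":" + c.getLocation().getY();
            List<ContainerData> stack = stacks.get(key);
            if (stack == null) {
                stack = new ArrayList<ContainerData>();
                stacks.put(key, stack);
            }
            stack.add(c);
        }
        List<List<ContainerData>> ordered = new ArrayList<List<ContainerData>>();
        for (List<ContainerData> stack : stacks.values()) {
            // within a stack, the top container (highest z) must come out first
            Collections.sort(stack, new Comparator<ContainerData>() {
                public int compare(ContainerData a, ContainerData b) {
                    return b.getLocation().getZ() - a.getLocation().getZ();
                }
            });
            ordered.add(stack);
        }
        // stacks whose earliest departure comes first are emptied first
        Collections.sort(ordered, new Comparator<List<ContainerData>>() {
            public int compare(List<ContainerData> a, List<ContainerData> b) {
                return minDeparture(a).compareTo(minDeparture(b));
            }
        });
        List<ContainerData> pickOrder = new ArrayList<ContainerData>(containers.size());
        for (List<ContainerData> stack : ordered) pickOrder.addAll(stack);

    with a small helper for the earliest departure in a stack:

        static Date minDeparture(List<ContainerData> stack) {
            Date min = stack.get(0).getLeaveDateTimeFrom();
            for (ContainerData c : stack)
                if (c.getLeaveDateTimeFrom().before(min)) min = c.getLeaveDateTimeFrom();
            return min;
        }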


  • How can I scale movement physics functions to frames per second (in a game engine)?

    - by Richard
    I am working on a game in JavaScript (HTML5 Canvas). I implemented a simple algorithm that allows an object to follow another object with basic physics mixed in (a force vector to drive the object in the right direction, and the velocity stacks momentum, but is slowed by a constant drag force). At the moment, I set it up as a rectangle following the mouse (x, y) coordinates. Here's the code:

        // rectangle x, y position
        var x = 400;   // starting x position
        var y = 250;   // starting y position
        var FPS = 60;  // frames per second of the screen

        // physics variables:
        var velX = 0;      // initial velocity at 0 (not moving)
        var velY = 0;      // not moving
        var drag = 0.92;   // drag force reduces velocity by 8% per frame
        var force = 0.35;  // overall force applied to move the rectangle
        var angle = 0;     // angle in which to move

        // called every frame (at 60 frames per second):
        function update(){
            // calculate distance between mouse and rectangle
            var dx = mouseX - x;
            var dy = mouseY - y;

            // calculate angle between mouse and rectangle
            var angle = Math.atan(dy/dx);
            if(dx < 0)
                angle += Math.PI;
            else if(dy < 0)
                angle += 2*Math.PI;

            // calculate the force (on or off, depending on user input)
            var curForce;
            if(keys[32])            // SPACE bar
                curForce = force;   // if pressed, use 0.35 as force
            else
                curForce = 0;       // otherwise, force is 0

            // increment velocity by the force, and scale by drag for x and y
            velX += curForce * Math.cos(angle);
            velX *= drag;
            velY += curForce * Math.sin(angle);
            velY *= drag;

            // update x and y by their velocities
            x += velX;
            y += velY;
        }

    And that works fine at 60 frames per second. Now, the tricky part: my question is, if I change this to a different framerate (say, 30 FPS), how can I modify the force and drag values to keep the movement constant? That is, right now my rectangle (whose position is dictated by the x and y variables) moves at a maximum speed of about 4 pixels per frame, and accelerates to its max speed in about 1 second. But if I change the framerate, it moves slower (e.g. 30 FPS accelerates to only 2 pixels per frame). So, how can I create an equation that takes FPS (frames per second) as input, and spits out correct "drag" and "force" values that will behave the same way in real time? I know it's a heavy question, but perhaps somebody with game design experience, or knowledge of programming physics, can help. Thank you for your efforts. jsFiddle: http://jsfiddle.net/BadDB
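
    A hedged derivation using the code's own constants: each frame computes v' = (v + F) * d, so the per-frame terminal speed is v* = F*d/(1 - d), and the velocity decays by d per frame, i.e. d^FPS per second. Holding the per-second decay and the pixels-per-second top speed constant across frame rates gives one possible tuning function:

        function tuneFor(fps) {
            var refFps = 60, refDrag = 0.92, refForce = 0.35;
            // keep the per-second velocity decay equal: drag^fps == refDrag^refFps
            var drag = Math.pow(refDrag, refFps / fps);
            // keep the top speed in pixels per SECOND equal: the per-frame
            // terminal speed v* = force*drag/(1-drag) must scale by refFps/fps
            var vTop = refForce * refDrag / (1 - refDrag) * (refFps / fps);
            var force = vTop * (1 - drag) / drag;
            return { drag: drag, force: force };
        }
        // tuneFor(60) returns {drag: 0.92, force: 0.35} (the original values);
        // tuneFor(30) returns roughly {drag: 0.8464, force: 1.46}, doubling the
        // per-frame step so the on-screen speed stays the same.

    This matches the acceleration curve only approximately; the exact, frame-rate-proof alternative is a fixed physics timestep decoupled from the render rate.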


  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many-to-many mapping of one of my tables in LINQ to SQL. Many-to-many mappings aren't transparent in LINQ to SQL, and it maps the link table the same way the SQL schema has it when creating one. In other words, LINQ to SQL isn't smart about many-to-many mappings and just treats it like the 3 underlying tables that make up the many-to-many relationship. Iain Galloway has a nice blog entry about many-to-many relationships in LINQ to SQL. I can live with that – it's not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult, as you do have to insert into two entities for new records, but nothing that can't be handled in a small business object method with a few lines of code.

    When I created a database I've been using to experiment with various different OR/Ms recently, I found that for some reason LINQ to SQL was completely failing to map even the linking table. As it turns out there's a good reason why it fails; can you spot it below? (read on :-})

    In the original database layout, there's an Items table, a Category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model they *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix). The relationship looks perfectly fine, both in the designer and in the XML document:

        <Table Name="dbo.wws_Item_Categories" Member="ItemCategories">
          <Type Name="ItemCategory">
            <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
            <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
            <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" />
            <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" />
          </Type>
        </Table>
        <Table Name="dbo.wws_Categories" Member="Categories">
          <Type Name="Category">
            <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
            <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" />
            <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" />
            <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" />
            <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" />
            <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" />
          </Type>
        </Table>

    However, when looking at the generated code, these navigation properties (also on Item) are completely missing:

        [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")]
        [global::System.Runtime.Serialization.DataContractAttribute()]
        public partial class ItemCategory : Westwind.BusinessFramework.EntityBase
        {
            private System.Guid _ItemId;
            private System.Guid _CategoryId;

            public ItemCategory()
            { }

            [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")]
            [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)]
            public System.Guid ItemId
            {
                get { return this._ItemId; }
                set
                {
                    if ((this._ItemId != value))
                    {
                        this._ItemId = value;
                    }
                }
            }

            [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")]
            [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)]
            public System.Guid CategoryId
            {
                get { return this._CategoryId; }
                set
                {
                    if ((this._CategoryId != value))
                    {
                        this._CategoryId = value;
                    }
                }
            }
        }

    Notice that the Item and Category association properties, which should be EntityRef properties, are completely missing. They're there in the model, but in the generated code – not so much. So what's the problem here?

    The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious, ain't it, especially since the designer happily lets you import the table and even shows the relationship and, implicitly, the related properties. Adding an Id field as a PK to the database and then importing results in a model that properly generates the Item and Category properties into the link entity.

    It's ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in order to recognize a many-to-many relation. EF actually handles the M->M relation directly without the intermediate link entity, unlike LINQ to SQL.

    [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId as the primary key in the database, which also works in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach, as it also guarantees uniqueness of the keys and so helps database integrity.

    It took me a while to figure out WTF was going on here – lulled by the designer to think that the properties should be there when they were not. It's actually a well-documented feature of L2S that each entity in the model requires a PK, but of course that's easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issues with L2S: you have to play by its rules, and once you hit one of those rules there's no way around them – you're stuck with what it requires, which in this case meant changing the database.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ADO.NET  LINQ
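
    For completeness, a hedged sketch of the write path mentioned at the top, once the link table has its PK: a many-to-many insert in LINQ to SQL always goes through the link entity explicitly. Here context stands for your generated DataContext, and the table property names (Items, Categories, ItemCategories) are illustrative of a typical generated model rather than taken from the post:

        var item = new Item { Id = Guid.NewGuid() };
        var category = new Category { Id = Guid.NewGuid(), CategoryName = "Software" };
        // the link row is just a third entity in L2S
        var link = new ItemCategory { ItemId = item.Id, CategoryId = category.Id };

        context.Items.InsertOnSubmit(item);
        context.Categories.InsertOnSubmit(category);
        context.ItemCategories.InsertOnSubmit(link);
        context.SubmitChanges();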

    Read the article

  • Analytic functions – they’re not aggregates

    - by Rob Farley
    SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I'm a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables. The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012.

    This post is going to look at a particular aspect of the analytic functions though (not the enhancements to the OVER clause). When I give presentations about the analytic functions around Australia as part of the tour of SQL Saturdays (starting in Brisbane this Thursday), and in Chicago next month, I'll make sure it's sufficiently well described. But for this post – I'm going to skip that and assume you get it.

    The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC. The reason I'm pulling these ones out is that they always produce the same result within their partitions (if you're applying them to the whole partition). Consider the following query:

        SELECT
            YEAR(OrderDate),
            FIRST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING),
            LAST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING),
            PERCENTILE_CONT(0.95)
                WITHIN GROUP (ORDER BY TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)),
            PERCENTILE_DISC(0.95)
                WITHIN GROUP (ORDER BY TotalDue)
                OVER (PARTITION BY YEAR(OrderDate))
        FROM Sales.SalesOrderHeader
        ;

    This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95% percentile, using both the continuous and discrete methods ('discrete' means it picks the closest one from the values available – 'continuous' means it will happily use something in between, similar to what you would do for a traditional median of four values). I'm sure you can imagine the results – a different value for each field, but within each year, all the rows the same.

    Notice that I'm not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days). The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we're considering all the rows available. If we don't specify that, it assumes we only mean "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW", which means that LAST_VALUE ends up being the row we're looking at.

    At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST. We really should be able to do something like:

        SELECT
            YEAR(OrderDate),
            FIRST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING)
        FROM Sales.SalesOrderHeader
        GROUP BY YEAR(OrderDate)
        ;

    But you can't. You get that age-old error:

        Msg 8120, Level 16, State 1, Line 5
        Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
        Msg 8120, Level 16, State 1, Line 5
        Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

    Hmm. You see, FIRST_VALUE isn't an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group. Furthermore, you can't even surround it in a MAX. Then you get a different error, telling you that you can't use windowed functions in the context of an aggregate. And so we end up grouping by doing a DISTINCT:

        SELECT DISTINCT
            YEAR(OrderDate),
            FIRST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING),
            LAST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING),
            PERCENTILE_CONT(0.95)
                WITHIN GROUP (ORDER BY TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)),
            PERCENTILE_DISC(0.95)
                WITHIN GROUP (ORDER BY TotalDue)
                OVER (PARTITION BY YEAR(OrderDate))
        FROM Sales.SalesOrderHeader
        ;

    I'm sorry. It's just the way it goes. Hopefully it'll change in the future, but for now, it's what you'll have to do.

    If we look at the execution plan, we see that it's incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results. You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it's definitely convenient to use these functions, and in time, I'm sure we'll see good improvements in the way that they are implemented. Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy's T-SQL Tuesday this month.
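
    One workaround worth knowing (a sketch, not from the post, assuming the same AdventureWorks schema): compute the window functions in a CTE and group in the outer query. Because the windowed values are identical within each year, wrapping them in MAX outside the CTE is safe – and unlike MAX directly around the windowed function, it's legal, since the window function is evaluated in a separate query scope. Whether the resulting plan beats the DISTINCT version is worth checking for yourself:

        WITH yearly AS
        (
            SELECT
                YEAR(OrderDate) AS OrderYear,
                FIRST_VALUE(TotalDue)
                    OVER (PARTITION BY YEAR(OrderDate)
                          ORDER BY OrderDate, SalesOrderID
                          RANGE BETWEEN UNBOUNDED PRECEDING
                                    AND UNBOUNDED FOLLOWING) AS FirstTotalDue,
                LAST_VALUE(TotalDue)
                    OVER (PARTITION BY YEAR(OrderDate)
                          ORDER BY OrderDate, SalesOrderID
                          RANGE BETWEEN UNBOUNDED PRECEDING
                                    AND UNBOUNDED FOLLOWING) AS LastTotalDue
            FROM Sales.SalesOrderHeader
        )
        SELECT OrderYear,
               MAX(FirstTotalDue) AS FirstTotalDue,  -- identical within the year, so MAX is safe
               MAX(LastTotalDue)  AS LastTotalDue    -- the percentile columns work the same way
        FROM yearly
        GROUP BY OrderYear
        ;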

    Read the article

  • Thursday Community Keynote: "By the Community, For the Community"

    - by Janice J. Heiss
    Sharat Chander, JavaOne Community Chairperson, began Thursday's Community Keynote. As part of the morning's theme of "By the Community, For the Community," Chander noted that 60% of the material at the 2012 JavaOne conference was presented by Java Community members. "So next year, when the call for papers starts, put in your submissions," he urged.

    From there, Gary Frost, Principal Member of Technical Staff, AMD, expanded upon Sunday's Strategy Keynote exploration of Project Sumatra, an OpenJDK project targeted at bringing Java to heterogeneous computing platforms (which combine the CPU and the parallel processor of the GPU into a single piece of silicon). Sumatra entails enhancing the JVM to make maximum use of these advanced platforms. Within this development space, AMD created the Aparapi API, which converts Java bytecode into OpenCL for execution on such GPU devices. The Aparapi API was open sourced in September 2011. Whether it was zooming in on a Mandelbrot set, "the game of life," or a swarm of 10,000 Dukes in a space-bound gravitational dance, Frost's demos, using an Aparapi/OpenCL implementation, produced stunningly faster display results. He indicated that the Java 9 timeframe is where they see Project Sumatra coming to ultimate fruition, employing the Lambdas of Java 8.

    Returning to the theme of the keynote, Donald Smith, Director, Java Product Management, Oracle, explored a mind-map graphic demonstrating the importance of Community in terms of fostering innovation. "It's the sharing and mixing of culture, the diversity, and the rapid prototyping," he said. Within this topic, Smith brought up a panel of representatives from Cloudera, Eclipse, Eucalyptus, Perrone Robotics, and Twitter--ideal manifestations of community and innovation in the world of Java. Marten Mickos, CEO, Eucalyptus Systems, explored his company's open source cloud software platform, written in Java and used by gaming companies, technology companies, media companies, and more. Chris Aniszczyk, Operations Engineering, Twitter, noted the importance of the JVM in terms of their multiple-language development environment. Mike Olson, CEO, Cloudera, described his company's Apache Hadoop-based software, support, and training. Mike Milinkovich, Executive Director, Eclipse Foundation, noted that they have about 270 tools projects at Eclipse, with 267 of them written in Java. Milinkovich added that Eclipse will even be going into space in 2013, as part of the control software on various experiments aboard the International Space Station. Lastly, Paul Perrone, CEO, Perrone Robotics, detailed his company's robotics and automation software platform built 100% on Java, including Java SE and Java ME--"on rat, to cat, to elephant-sized systems." Milinkovich noted that communities are by nature so good at innovation because of their very openness--"The more open you make your innovation process, the more ideas are challenged, and the more developers are focused on justifying their choices all the way through the process."

    From there, Georges Saab, VP Development Java SE OpenJDK, continued the topic of innovation and helping the Java Community to "Make the Future Java." Martijn Verburg, representing the London Java Community (winner of a Duke's Choice Award 2012 for their activity in OpenJDK and the JCP), soon joined Saab onstage. Verburg detailed the LJC's "Adopt a JSR" program--"to get day-to-day developers more involved in the innovation that's happening around them." From its London launching pad, the innovative program has spread to Brazil, Morocco, Latvia, India, and more. Other active participants in the program joined Verburg onstage--Ben Evans, London Java Community; James Gough, Stackthread; Bruno Souza, SOUJava; Richard Warburton, jClarity; and Cecelia Borg, Oracle--OpenJDK Onboarding. Together, the group explored the goals and tasks inherent in the Adopt a JSR program--from organizing hack days (testing prototype implementations), to managing mailing lists and forums, to triaging issues, to evangelism--all with the goal of fostering greater community/developer involvement, but equally importantly, building better open standards. "Come join us, and make your ecosystem better!" urged Verburg.

    Paul Perrone returned to profile the latest in his company's robotics work around Java--including the AARDBOTS family of smaller robotic vehicles, running the Perrone MAX platform on top of the Java JVM. Perrone took his "Rumbles" four-wheeled robot out for a spin onstage--a roaming, ARM-based security-bot vehicle, complete with IR, ultrasonic, and "cliff" sensors (the latter, for the raised stage at JavaOne). As an ultimate window into the future of robotics, Perrone displayed a "head-set" controller--a sensor directed at the forehead to monitor brainwaves, for the someday-implementation of brain-to-robot control.

    Then, just when it seemed this might be the end of the day's futuristic offerings, a mystery voice from offstage pronounced "I've got some toys"--proving to be guest-visitor James Gosling, there to explore his cutting-edge work with Liquid Robotics. While most think of robots as something with wheels or arms or lasers, Gosling explained, the Liquid Robotics vehicle is an entirely new and innovative ocean-going 'bot. Looking like a floating surfboard, with an attached set of underwater wings, the autonomous devices roam the oceans using only the energy of ocean waves to propel them, and a single actuated rudder to steer. "We have to accomplish all guidance just by wiggling the rudder," Gosling said. The devices offer applications from self-installing weather buoy, to pollution monitoring station, to marine mammal monitoring device, to climate change data gathering, to even ocean life genomic sampling. The early versions of the vehicle used C code on very tiny industrial microcontrollers, where they had to "count the bytes one at a time." But the latest generation vehicles, which just hit the water a week or so ago, employ an ARM processor running Linux and the ARM version of JDK 7. Gosling explained that vehicle communication from remote locations is achieved via the Iridium satellite network. But because of the costs of this communication path, the data must be sent in very small bursts--using SBD (short burst data). "It costs $1/kb, so that rules everything in the software design," said Gosling. "If you were trying to stream a Netflix video over this, it would cost a million dollars a movie. ...We don't have a 'big data' problem," he quipped. There are currently about 150 Liquid Robotics vehicles out traversing the oceans. Gosling demonstrated real-time satellite tracking of several vehicles currently at sea, noting that Java is actually particularly good at AI applications--due to the language having garbage collection, which facilitates complex data structures.

    To close out his time onstage, Gosling of course participated in the ceremonial Java tee-shirt toss out to the audience. In parting, Chander passed the JavaOne Community Chairperson baton to Stephen Chin, Java Technology Evangelist, Oracle. Onstage in full motorcycle gear, Chin noted that he'll soon be touring Europe by motorcycle, meeting Java Community members and streaming live via UStream--the ultimate manifestation of community and technology! He also reminded attendees of the upcoming JavaOne Latin America 2012, São Paulo, Brazil (December 4-6, 2012), and stated that the CFP (call for papers) at the conference has been extended for one more week. "Remember, December is summer in Brazil!" Chin said.

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load, from the aspect of the mapping developer and of the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap in the datastores that they access, since any temporary resources created can be uniquely constructed by ODI. Temporary objects can be anything, basically - common examples are staging tables, indexes, views, directories - anything in the ETL that helps the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).

    1. Firstly as a Mapping Developer.....

    1.1 Control when uniqueness is enabled
    A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.

    1.2 Handle cleanup after successful execution
    Provided that all temporary objects that are created have a corresponding drop statement, all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.

    1.3 Handle cleanup after unsuccessful execution
    If an execution failed in ODI 11g, temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to clean up the tables - it supports date ranges and the like.

    That's all there is to it from the aspect of the mapping developer - it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflict.

    2. Secondly as a Procedure or KM Developer.....

    In the ODI Operator the executed code shows the actual name that is generated - you can also see the runtime code prior to execution (introduced in 11.1.1.7); for example, selecting 'Pre-executed Code' as the code type lets you see the code about to be processed, alongside the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs, as the sketch below illustrates.
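
    To make that concrete, here is a minimal illustrative sketch (not from the post) of a KM-style create task and its matching cleanup task, using the %UNIQUE_STEP_TAG substitution described in section 2.1 below. The staging table layout and the exact odiRef parameters are assumptions; the getObjectName() call signature follows the examples in 2.1:

        -- Create-work-table task: %UNIQUE_STEP_TAG expands at runtime to a
        -- value unique to this step's execution scope, so concurrent sessions
        -- build differently-named staging tables instead of colliding.
        CREATE TABLE <?=odiRef.getObjectName("L", "%COL_PRFSTAGE%UNIQUE_STEP_TAG", "D")?>
        (
            ITEM_ID   NUMBER,
            ITEM_NAME VARCHAR2(150)
        );

        -- Matching cleanup task (marked as a cleanup-type task so it still
        -- runs after a failure when 'Remove Temporary Objects on Error' is
        -- set). Within the same execution scope, the same parameters should
        -- yield the same generated name, so this DROP targets the table
        -- created above (assuming both tasks run in the same step scope).
        DROP TABLE <?=odiRef.getObjectName("L", "%COL_PRFSTAGE%UNIQUE_STEP_TAG", "D")?>;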
    2.1 New uniqueness tags
    You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names will always include the unique tag regardless of the concurrency setting.

    To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology, and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples:

        <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?>
        SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ

        <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?>
        SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_

    Methods which have this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are also APIs for retrieving this tag info; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures):

    UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61. Note that this will be a different value for each loop iteration when the step is in a loop.
    UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9
    IS_CONCURRENT - Returns info about the current mapping; returns 0 or 1 (only in the % phase)
    GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in the % phase)

    The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping: 0 or 1.

    2.2 Additional APIs
    Some new APIs are provided, including getFormattedName, which allows KM developers to construct a name from fixed text or ODI symbols that can optionally be truncated to a max length and use a specific encoding for the unique tag. It has the syntax

        getFormattedName(String pName[, String pTechnologyCode])

    This API is available at both the % and the ? phase. The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name provided that the same parameters are passed to the call. Examples:

        <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%>
        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?>
        C$_MY_TAB7wDiBe80vBog1auacS1xB_AE

        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?>
        C2_MY_TAB7wDiBe80vBog1auacS1xB.log

    2.3 Name length generation
    As part of name generation, the length of the generated name will be compared with the maximum length for the target technology, and truncation may need to be applied.
    When a unique tag is included in the generated string, it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to guarantee uniqueness of the final resultant name.

    SUMMARY
    To summarize, ODI 12c makes it much simpler to utilize mappings in concurrent cases, and provides APIs that help you develop procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.

    Read the article
