Search Results

Search found 3042 results on 122 pages for 'archive'.


  • Downloading Emails locally with Thunderbird

    - by r_honey
    I have been using Gmail (web interface) for some time now and have well over 20 labels and a few thousand mails archived under different labels. Now I want a local copy of all my mails, and the following points are important in this context:
    - The primary mail access mechanism will continue to be the Gmail web interface; I just want a local backup of my mail account.
    - Ideally the mails should download locally into folders named after the Gmail labels (I know this is possible via IMAP, but probably not via POP).
    - After all my mails are available locally, I will delete most of them in Gmail, both to free up space and because I want to archive them. The mails should continue to exist locally and must not be deleted when I delete them from the Gmail web interface.
    - I would sync my Gmail account locally, say, every month, so the new mails I have sent/received during that period should come over to my local mailbox, again into the folders named after the Gmail labels.
    - I understand that Gmail keeps a single copy of a mail carrying two different labels, and that such a mail would be duplicated locally in the two folders; I am okay with that.
    Essentially, I just want to archive my mails from the Gmail server to a local backup and then sync new mails (one way, from Gmail to local) at regular intervals. For some of the points above POP seems to be the option, while IMAP fits the others. I am really confused and need help deciding which of POP or IMAP would suit me best. I have currently chosen Thunderbird as my local email client, but would have no problem switching to Outlook or anything else as long as I get the desired archiving functionality.

    Read the article

  • Warning: Cannot modify header information - headers already sent

    - by Pankaj Khurana
    Hi, I am using code available on http://www.forosdelweb.com/f18/zip-lib-php-archivo-zip-vacio-431133/ for creating zip file. First file-zip.lib.php <?php /* $Id: zip.lib.php,v 1.1 2004/02/14 15:21:18 anoncvs_tusedb Exp $ */ // vim: expandtab sw=4 ts=4 sts=4: /** * Zip file creation class. * Makes zip files. * * Last Modification and Extension By : * * Hasin Hayder * HomePage : www.hasinme.info * Email : [email protected] * IDE : PHP Designer 2005 * * * Originally Based on : * * http://www.zend.com/codex.php?id=535&single=1 * By Eric Mueller <[email protected]> * * http://www.zend.com/codex.php?id=470&single=1 * by Denis125 <[email protected]> * * a patch from Peter Listiak <[email protected]> for last modified * date and time of the compressed file * * Official ZIP file format: http://www.pkware.com/appnote.txt * * @access public */ class zipfile { /** * Array to store compressed data * * @var array $datasec */ var $datasec = array(); /** * Central directory * * @var array $ctrl_dir */ var $ctrl_dir = array(); /** * End of central directory record * * @var string $eof_ctrl_dir */ var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00"; /** * Last offset position * * @var integer $old_offset */ var $old_offset = 0; /** * Converts an Unix timestamp to a four byte DOS date and time format (date * in high two bytes, time in low two bytes allowing magnitude comparison). * * @param integer the current Unix timestamp * * @return integer the current date in a four byte DOS format * * @access private */ function unix2DosTime($unixtime = 0) { $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime); if ($timearray['year'] < 1980) { $timearray['year'] = 1980; $timearray['mon'] = 1; $timearray['mday'] = 1; $timearray['hours'] = 0; $timearray['minutes'] = 0; $timearray['seconds'] = 0; } // end if return (($timearray['year'] - 1980) << 25) | ($timearray['mon'] << 21) | ($timearray['mday'] << 16) | ($timearray['hours'] << 11) | ($timearray['minutes'] << 5) | ($timearray['seconds'] >> 1); } // end of the 'unix2DosTime()' method /** * Adds "file" to archive * * @param string file contents * @param string name of the file in the archive (may contains the path) * @param integer the current timestamp * * @access public */ function addFile($data, $name, $time = 0) { $name = str_replace('', '/', $name); $dtime = dechex($this->unix2DosTime($time)); $hexdtime = 'x' . $dtime[6] . $dtime[7] . 'x' . $dtime[4] . $dtime[5] . 'x' . $dtime[2] . $dtime[3] . 'x' . $dtime[0] . $dtime[1]; eval('$hexdtime = "' . $hexdtime . 
'";'); $fr = "\x50\x4b\x03\x04"; $fr .= "\x14\x00"; // ver needed to extract $fr .= "\x00\x00"; // gen purpose bit flag $fr .= "\x08\x00"; // compression method $fr .= $hexdtime; // last mod time and date // "local file header" segment $unc_len = strlen($data); $crc = crc32($data); $zdata = gzcompress($data); $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug $c_len = strlen($zdata); $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize $fr .= pack('v', strlen($name)); // length of filename $fr .= pack('v', 0); // extra field length $fr .= $name; // "file data" segment $fr .= $zdata; // "data descriptor" segment (optional but necessary if archive is not // served as file) $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize // add this entry to array $this -> datasec[] = $fr; // now add to central directory record $cdrec = "\x50\x4b\x01\x02"; $cdrec .= "\x00\x00"; // version made by $cdrec .= "\x14\x00"; // version needed to extract $cdrec .= "\x00\x00"; // gen purpose bit flag $cdrec .= "\x08\x00"; // compression method $cdrec .= $hexdtime; // last mod time & date $cdrec .= pack('V', $crc); // crc32 $cdrec .= pack('V', $c_len); // compressed filesize $cdrec .= pack('V', $unc_len); // uncompressed filesize $cdrec .= pack('v', strlen($name) ); // length of filename $cdrec .= pack('v', 0 ); // extra field length $cdrec .= pack('v', 0 ); // file comment length $cdrec .= pack('v', 0 ); // disk number start $cdrec .= pack('v', 0 ); // internal file attributes $cdrec .= pack('V', 32 ); // external file attributes - 'archive' bit set $cdrec .= pack('V', $this -> old_offset ); // relative offset of local header $this -> old_offset += strlen($fr); $cdrec .= $name; // optional extra field, file comment goes here // save to central directory $this -> ctrl_dir[] = $cdrec; } // end of the 'addFile()' method /** * Dumps out file * * @return string the zipped file * * @access public */ function file() { $data = implode('', $this -> datasec); $ctrldir = implode('', $this -> ctrl_dir); return $data . $ctrldir . $this -> eof_ctrl_dir . pack('v', sizeof($this -> ctrl_dir)) . // total # of entries "on this disk" pack('v', sizeof($this -> ctrl_dir)) . // total # of entries overall pack('V', strlen($ctrldir)) . // size of central dir pack('V', strlen($data)) . // offset to start of central dir "\x00\x00"; // .zip file comment length } // end of the 'file()' method /** * A Wrapper of original addFile Function * * Created By Hasin Hayder at 29th Jan, 1:29 AM * * @param array An Array of files with relative/absolute path to be added in Zip File * * @access public */ function addFiles($files /*Only Pass Array*/) { foreach($files as $file) { if (is_file($file)) //directory check { $data = implode("",file($file)); $this->addFile($data,$file); } } } /** * A Wrapper of original file Function * * Created By Hasin Hayder at 29th Jan, 1:29 AM * * @param string Output file name * * @access public */ function output($file) { $fp=fopen($file,"w"); fwrite($fp,$this->file()); fclose($fp); } } // end of the 'zipfile' class ?> My second file newzip.php <? 
include("zip.lib.php"); $ziper = new zipfile(); $ziper->addFiles(array("index.htm")); //array of files // the next three lines force an immediate download of the zip file: header("Content-type: application/octet-stream"); header("Content-disposition: attachment; filename=test.zip"); echo $ziper -> file(); ?> I am getting this warning while executing newzip.php Warning: Cannot modify header information - headers already sent by (output started at E:\xampp\htdocs\demo\zip.lib.php:233) in E:\xampp\htdocs\demo\newzip.php on line 6 I am unable to figure out the reason for the same. Please help me on this. Thanks

    Read the article

  • boost::serialization of mutual pointers

    - by KneLL
    First, please take a look at these code: class Key; class Door; class Key { public: int id; Door *pDoor; Key() : id(0), pDoor(NULL) {} private: friend class boost::serialization::access; template <typename A> void serialize(A &ar, const unsigned int ver) { ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pDoor); } }; class Door { public: int id; Key *pKey; Door() : id(0), pKey(NULL) {} private: friend class boost::serialization::access; template <typename A> void serialize(A &ar, const unsigned int ver) { ar & BOOST_SERIALIZATION_NVP(id) & BOOST_SERIALIZATION_NVP(pKey); } }; BOOST_CLASS_TRACKING(Key, track_selectively); BOOST_CLASS_TRACKING(Door, track_selectively); int main() { Key k1, k_in; Door d1, d_in; k1.id = 1; d1.id = 2; k1.pDoor = &d1; d1.pKey = &k1; // Save data { wofstream f1("test.xml"); boost::archive::xml_woarchive ar1(f1); // !!!!! (1) const Key *pK = &k1; const Door *pD = &d1; ar1 << BOOST_SERIALIZATION_NVP(pK) << BOOST_SERIALIZATION_NVP(pD); } // Load data { wifstream i1("test.xml"); boost::archive::xml_wiarchive ar1(i1); // !!!!! (2) A *pK = &k_in; B *pD = &d_in; // (2.1) //ar1 >> BOOST_SERIALIZATION_NVP(k_in) >> BOOST_SERIALIZATION_NVP(d_in); // (2.2) ar1 >> BOOST_SERIALIZATION_NVP(pK) >> BOOST_SERIALIZATION_NVP(pD); } } The first (1) is a simple question - is it possible to pass objects to archive without pointers? If simply pass objects 'as is' that boost throws exception about duplicated pointers. But I'm confused of creating pointers to save objects. The second (2) is a real trouble. If comment out string after (2.1) then boost will corectly load a first Key object (and init internal Door pointer pDoor), but will not init a second Door (d_in) object. After this I have an inited *k_in* object with valid pointer to Door and empty *d_in* object. If use string (2.2) then boost will create two Key and Door objects somewhere in memory and save addresses in pointers. But I want to have two objects *k_in* and *d_in*. So, if I copy a values of memory objects to local variables then I store only addresses, for example, I can write code after (2.2): d_in.id = pD->id; d_in.pKey = pD->pKey; But in this case I store only a pointer and memory object remains in memory and I cannot delete it, because *d_in.pKey* will be unvalid. And I cannot perform a deep copy with operator=(), because if I write code like this: Key &operator==(const Key &k) { if (this != &k) { id = k.id; // call to Door::operator=() that calls *pKey = *d.pKey and so on *pDoor = *k.pDoor; } return *this; } then I will get a something like recursion of operator=()s of Key and Door. How to implement proper serialization of such pointers?
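
    On (1): passing the objects through pointers, as the save block already does, is the usual way to let the archive's tracking see that Key and Door reference each other, so that part is fine. On (2): one approach that avoids copying out of archive-allocated objects is to load back through null pointers and let Boost construct and wire both objects, then own them yourself. A minimal sketch of that idea, reusing the Key/Door classes from the question (an illustration of one option, not the only way to do this):

        // sketch only - assumes the Key/Door definitions shown above
        #include <fstream>
        #include <boost/archive/xml_woarchive.hpp>
        #include <boost/archive/xml_wiarchive.hpp>
        #include <boost/serialization/nvp.hpp>

        void save(const Key &k1, const Door &d1)
        {
            std::wofstream f1("test.xml");
            boost::archive::xml_woarchive ar1(f1);
            const Key  *pK = &k1;    // serialize via pointers so the mutual
            const Door *pD = &d1;    // references are tracked exactly once
            ar1 << BOOST_SERIALIZATION_NVP(pK) << BOOST_SERIALIZATION_NVP(pD);
        }

        void load()
        {
            std::wifstream i1("test.xml");
            boost::archive::xml_wiarchive ar1(i1);
            Key  *pK = 0;            // leave them null: the archive allocates the
            Door *pD = 0;            // objects and restores pDoor/pKey for you
            ar1 >> BOOST_SERIALIZATION_NVP(pK) >> BOOST_SERIALIZATION_NVP(pD);

            // work with *pK and *pD directly (or hand them to smart pointers)
            // instead of copying them into stack variables; delete when done
            delete pK;
            delete pD;
        }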

    Read the article

  • Wordpress > Custom wp-archives list by year and category with post count in sidebar

    - by David
    So I have been trying to add a custom sidebar archives section to this Wordpress Theme. I am trying to get two separate yearly archive sections, one for category A and one for category B. I need to get a post count and display it as well, to the left of "articles". This is the format I have been trying to get in the sidebar: Category 1 2010 - 5 articles 2009 - 4 articles 2008 - 6 articles 2007 - 5 articles Category 2 2010 - 8 articles 2009 - 3 articles 2008 - 7 articles 2007 - 5 articles This code is pulling the yearly archive, but the links to the yearly archives do not filter by category. However, if Category 1 does not have posts in 2008, 2008 would not show. So I feel like I am really close, but there is something wrong here in the code. I have also been unsuccessful in pulling the post count for the year/category either. Here is what I get: Category 1 2010 - articles (these links all take you to the general yearly archive page, not category specific) 2009 - articles 2007 - articles Category 2 2010 - articles 2009 - articles 2008 - articles 2007 - articles Here is are the functions I am using in my functions.php file, any idea what I am doing wrong? <?php function company_below_sidebar() { global $cat; ?> <div id="press"> <h2>Press</h2> <ul><?php $cat = '1'; wp_get_archives('type=yearly') ?></ul> </div> <div id="news"> <h2>News</h2> <ul><?php $cat = '4'; wp_get_archives('type=yearly') ?></ul> </div> <?php } add_action('thematic_betweenmainasides', 'company_below_sidebar'); function customarchives_join( $x ) { global $wpdb; return $x . " INNER JOIN $wpdb->term_relationships ON ($wpdb->posts.ID = $wpdb- >term_relationships.object_id) INNER JOIN $wpdb->term_taxonomy ON ($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)"; } function customarchives_where( $x ) { global $wpdb; global $cat; return $x . " AND $wpdb->term_taxonomy.taxonomy = 'category' AND $wpdb->term_taxonomy.term_id IN ($cat)"; } add_filter( 'getarchives_where', 'customarchives_where' ); add_filter( 'getarchives_join', 'customarchives_join' ); function company_monthly_archive_flexible($input) { // Get URL from $input preg_match('(((f|ht){1}tp://)[-a-zA-Z0-9@:%_\+.~#?&;//=]+)', $input, $matches); $url = $matches[0]; // Get content from $input (without the tags) $content = trim(strip_tags($input)); // Seperate each date element and put it in an array $dates = explode(" ", $content); // Get year $year = $dates[0]; // Customize output format $format = "<li><a href='$url'><b>$year -</b> articles</a></li>\n"; echo $format; } add_filter('get_archives_link','company_monthly_archive_flexible'); ?>
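
    One thing to keep in mind is that the getarchives_join/getarchives_where filters only change which rows are counted; they never touch the URLs that wp_get_archives() prints, so every year link still points at the plain yearly archive. A sketch of one possible workaround is below: it asks wp_get_archives() for the post counts directly (show_post_count=1) and appends the category to each generated link. It assumes pretty permalinks and the same $cat global and join/where filters as above, and note that a get_archives_link filter must return its result rather than echo it:

        <?php
        function company_below_sidebar() {
            global $cat;
            ?>
            <div id="press">
                <h2>Press</h2>
                <ul><?php $cat = '1'; wp_get_archives('type=yearly&show_post_count=1'); ?></ul>
            </div>
            <div id="news">
                <h2>News</h2>
                <ul><?php $cat = '4'; wp_get_archives('type=yearly&show_post_count=1'); ?></ul>
            </div>
            <?php
        }
        add_action('thematic_betweenmainasides', 'company_below_sidebar');

        function company_category_archive_link( $link ) {
            global $cat;
            // turn e.g. /2010/ into /2010/?cat=1 so the target page is
            // filtered by category too (assumes no existing query string)
            return preg_replace( "/href='([^']+)'/", "href='\$1?cat={$cat}'", $link );
        }
        add_filter( 'get_archives_link', 'company_category_archive_link' );
        ?>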

    Read the article

  • css problem lining up some icons...

    - by Ronedog
    I'm having a problem lining up some icons and am new enough to css that I'm not exactly sure how to explain this. So I've attached a picture of what the out put is rendering like. I've also included what the css and html code is. Hopefully someone can help point me in the right direction to help fix this. I want the "edit", "archive", "delete" icons to all line up in the right side exactly the same as the first row in the picture. Here is the html: <ul id="nav"> <li>California <div class="portf_edit"> <span> <img src="../images/edit.png"> </span> </div> <div class="portf_archive"> <span> <img src="../images/archive.png"> </span> </div> <div class="portf_delete"> <span> <img src="../images/delete.png"> </span> </div> </li> <li>Hyrum <div class="portf_edit"> <span> <img src="../images/edit.png"> </span> </div> <div class="portf_archive"> <span> <img src="../images/archive.png"> </span> </div> <div class="portf_delete"> <span> <img src="../images/delete.png"> </span> </div> </li> Here is the css: li { list-style-type:none; vertical-align: bottom; list-style-image: none; left:0px; text-align:left; } ul { list-style-type: none; vertical-align: bottom; list-style-image: none; left:0px; } ul#nav{ margin-left:0; padding-left:0px; text-indent:15px; } .portf_edit{ float:right; position: relative; right:50px; display:block; } .portf_archive{ float:right; position: relative; right:-5px; display:block; } .portf_delete{ float:right; position: relative; right: -60px; display:block; } Here's what is being output: Any ideas where to start? Thanks.
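
    Without the screenshot it is hard to be certain, but a frequent cause of rows drifting like this is combining float: right with a different relative offset for each icon, so every icon is nudged independently of its row. A sketch of a simpler rule set that usually keeps the three icons in the same spot on every row - the class names come from the question, the 30px width is an assumption to adjust to the real images:

        /* one shared rule instead of three different offsets */
        .portf_edit,
        .portf_archive,
        .portf_delete {
            float: right;        /* right-floats stack right-to-left in source order,
                                    so reverse the three divs in the HTML (or float
                                    them left inside a right-aligned wrapper) if the
                                    edit/archive/delete order matters */
            width: 30px;         /* assumed; match the actual icon width */
            text-align: center;
            position: static;    /* drop the relative left/right offsets */
        }

        ul#nav li {
            overflow: hidden;    /* make each li contain its floated icons */
        }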

    Read the article

  • Force close message when preferences are called via menu button

    - by Dan T
    I see no problem in the code. Help? preferences.xml <?xml version="1.0" encoding="utf-8"?> <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"> <ListPreference android:title="Gender" android:summary="Are you male or female?" android:key="genderPref" android:defaultValue="male" android:entries="@array/genderArray" android:entryValues="@array/genderValues" /> <ListPreference android:title="Weight" android:summary="How much do you weigh?" android:key="weightPref" android:defaultValue="180" android:entries="@array/weightArray" android:entryValues="@array/weightValues" /> </PreferenceScreen> arrays.xml <?xml version="1.0" encoding="utf-8"?> <resources> <string-array name="genderArray"> <item>Male</item> <item>Female</item> </string-array> <string-array name="genderValues"> <item>male</item> <item>female</item> </string-array> <string-array name="weightArray"> <item>120</item> <item>150</item> <item>180</item> <item>210</item> <item>240</item> <item>270</item> </string-array> <string-array name="weightValues"> <item>120</item> <item>150</item> <item>180</item> <item>210</item> <item>240</item> <item>270</item> </string-array> </resources> Preferences.java: package com.dantoth.drinkingbuddy; import android.os.Bundle; import android.preference.PreferenceActivity; public class Preferences extends PreferenceActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); addPreferencesFromResource(R.xml.preferences); }; } butts.xml (idk why it's butts but I've gotten used to it now. really just sets up the menu button) <?xml version="1.0" encoding="utf-8"?> <menu xmlns:android="http://schemas.android.com/apk/res/android"> <item android:id="@+id/settings" android:title="Settings" android:icon="@drawable/ic_menu_settings" /> <item android:id="@+id/archive" android:title="Archive" android:icon="@drawable/ic_menu_archive" /> <item android:id="@+id/new_session" android:title="New Session" android:icon="@drawable/ic_menu_new" /> <item android:id="@+id/about" android:title="About" android:icon="@drawable/ic_menu_about" /> </menu> within DrinkingBuddy.java: @Override public boolean onOptionsItemSelected(MenuItem item) { switch (item.getItemId()) { case R.id.settings: startActivity(new Intent(this, Preferences.class)); return true; case R.id.archive: Toast.makeText(this, "Expect to see your old drinking sessions here.", Toast.LENGTH_LONG).show(); return true; //ETC. } return false; That's it. I can press the menu button on the phone and see the menu items I created, but when I click on the "Settings" (r.id.settings) it FC. Do I have to do anything to the manifest/other thing to get this to work??
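
    Nothing in the XML or the activity code above looks wrong, and the manifest is exactly the place to look: the most common cause of a force close when starting a new activity is that it was never declared there. A sketch of the entry that is likely missing, inside the <application> element of AndroidManifest.xml (the package name is taken from Preferences.java; LogCat would confirm this with an ActivityNotFoundException):

        <!-- AndroidManifest.xml, inside <application> ... </application> -->
        <activity
            android:name=".Preferences"
            android:label="Settings" />
        <!-- ".Preferences" is resolved against package="com.dantoth.drinkingbuddy" -->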

    Read the article

  • Issue in Creating an Insert Query See Description Below...

    - by Parth
    I am creating a Insert Query using PHP.. By fetching the data from a Audit table and iterating the values of it in loops.. table from which I am fetching the value has the snapshot below: The Code I am using to create is given below: mysql_select_db('information_schema'); $select = mysql_query("SELECT TABLE_NAME FROM TABLES WHERE TABLE_SCHEMA = 'pranav_test'"); $selectclumn = mysql_query("SELECT * FROM COLUMNS WHERE TABLE_SCHEMA = 'pranav_test'"); mysql_select_db('pranav_test'); $seletaudit = mysql_query("SELECT * FROM jos_audittrail WHERE live = 0"); $tables = array(); $i = 0; while($row = mysql_fetch_array($select)) { $tables[$i++] =$row['TABLE_NAME']; } while($row2 = mysql_fetch_array($seletaudit)) { $audit[] =$row2; } foreach($audit as $val) { if($val['operation'] == "INSERT") { if(in_array($val['table_name'],$tables)) { $insert = "INSERT INTO '".$val['table_name']."' ("; $selfld = mysql_query("SELECT field FROM jos_audittrail WHERE table_name = '".$val['table_name']."' AND operation = 'INSERT' AND trackid = '".$val['trackid']."'"); while($row3 = mysql_fetch_array($selfld)) { $values[] = $row3; } foreach($values as $field) { $insert .= "'".$field['field']."', "; } $insert .= "]"; $insert = str_replace(", ]",")",$insert); $insert .= " values ("; $selval = mysql_query("SELECT newvalue FROM jos_audittrail WHERE table_name = '".$val['table_name']."' AND operation = 'INSERT' AND trackid = '".$val['trackid']."' AND live = 0"); while($row4 = mysql_fetch_array($selval)) { $value[] = $row4; } /*echo "<pre>"; print_r($value);exit;*/ foreach($value as $data) { $insert .= "'".$data['newvalue']."', "; } $insert .= "["; $insert = str_replace(", [",")",$insert); } } } When I Echo the $insert out of the most outer for loop (for auditrail) The values get printed as many times as the records are found for the outer for loop..i.e 'orderby= show_noauth= show_title= link_titles= show_intro= show_section= link_section= show_category= link_category= show_author= show_create_date= show_modify_date= show_item_navigation= show_readmore= show_vote= show_icons= show_pdf_icon= show_print_icon= show_email_icon= show_hits= feed_summary= page_title= show_page_title=1 pageclass_sfx= menu_image=-1 secure=0 ', '0000-00-00 00:00:00', '13', '20', '1', '152', 'accmenu', 'IPL', 'ipl', 'index.php?option=com_content&view=archive', 'component' gets repeated , i.e. 
INSERT INTO 'jos_menu' ('params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type', 'params', 'checked_out_time', 'ordering', 'componentid', 'published', 'id', 'menutype', 'name', 'alias', 'link', 'type') values ('orderby= show_noauth= show_title= link_titles= show_intro= show_section= link_section= show_category= link_category= show_author= show_create_date= show_modify_date= show_item_navigation= show_readmore= show_vote= show_icons= show_pdf_icon= show_print_icon= show_email_icon= show_hits= feed_summary= page_title= show_page_title=1 pageclass_sfx= menu_image=-1 secure=0 ', '0000-00-00 00:00:00', '13', '20', '1', '152', 'accmenu', 'IPL', 'ipl', 'index.php?option=com_content&view=archive', 'component', 'orderby= show_noauth= show_title= link_titles= show_intro= show_section= link_section= show_category= link_category= show_author= show_create_date= show_modify_date= show_item_navigation= show_readmore= show_vote= show_icons= show_pdf_icon= show_print_icon= show_email_icon= show_hits= feed_summary= page_title= show_page_title=1 pageclass_sfx= menu_image=-1 secure=0 ', '0000-00-00 00:00:00', '13', '20', '1', '152', 'accmenu', 'IPL', 'ipl', 'index.php?option=com_content&view=archive', 'component', 'orderby= show_noauth= .. .. .. .. and so on What I want is I should get these Values for once, I know there is mistake using the outer Forloop, but I m not getting the idea of rectifying it.. Please help... please poke me for more clarification...
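
    The repetition almost certainly comes from $values and $value never being reset: they are declared once and appended to on every pass of the outer foreach, so each later INSERT drags along all of the columns and values collected for earlier records. A sketch of the fix, re-initialising both arrays at the top of each iteration and leaving the rest of the logic as it is (trimmed down for illustration, not the full script):

        <?php
        foreach ($audit as $val) {
            if ($val['operation'] == "INSERT" && in_array($val['table_name'], $tables)) {
                // start from empty arrays for every audited record, otherwise
                // the previous record's fields and values pile up
                $values = array();
                $value  = array();

                // ... run the same two SELECTs and build $insert exactly as
                // before, now using only this trackid's fields and values ...
            }
        }
        ?>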

    Read the article

  • Error when opening .tar.gz via Shell to install Apache Maven

    - by adamsquared
    Thank you in advance for the help.

    My Goal: To install Apache Maven per its website's instructions (http://maven.apache.org/download.html), in order to install the JUNG package according to its install instructions (http://sourceforge.net/apps/trac/jung/wiki/JUNGManual), so I can use the JUNG classes in various Java GUIs.

    The Problem: I get an error message when I try to extract the apache-maven .tar.gz file in the shell.

    Background: I'm trying to install the JUNG package (http://jung.sourceforge.net/index.html) into my system's Java so I can write object-oriented code using the JUNG classes in various IDEs (Eclipse, DrJava). I don't understand how the building/installing process works, or how to get what I build/install working both in the IDEs and on the command line. I'm new to the shell and the command line, and mostly have experience using a simple IDE (DrJava, Python IDLE, the R GUI) to write and compile object-oriented code.

    Machine: Mac OS X 10.5.8, 32-bit.

    The Instructions: For building Maven: "Extract the distribution archive, i.e. apache-maven-3.0.4-bin.tar.gz, to the directory you wish to install Maven 3.0.4. These instructions assume you chose /usr/local/apache-maven. The subdirectory apache-maven-3.0.4 will be created from the archive." ... For the JUNG installation: "Appendix: How to Build JUNG. This is a brief intro to building JUNG jars with maven2 (the build system that JUNG currently uses). First, ensure that you have a JDK of at least version 1.5: JUNG 2.0+ requires Java 1.5+. Ensure that your JAVA_HOME variable is set to the location of the JDK. On a Windows platform, you may have a separate JRE (Java Runtime Environment) and JDK (Java Development Kit). The JRE has no capability to compile Java source files, so you must have a JDK installed. If your JAVA_HOME variable is set to the location of the JRE, and not the location of the JDK, you will be unable to compile. Get Maven: Download and install maven2 from maven.apache.org: http://maven.apache.org/download.html. At time of writing (early December 2009), the latest version was maven-2.2.1. Install the downloaded maven2 (there are installation instructions on the Maven website). Follow the installation instructions and confirm a successful installation by typing 'mvn --version' in a command terminal window. Get JUNG ..."

    What I Did: I downloaded the file apache-maven-2.2.1-bin.tar.gz (the JUNG website specified Apache Maven 2). I wanted to stick to the recommended installation instructions, but I couldn't get to /usr from the Finder (when you click the MacHD icon on the desktop, several directories/folders that you can see with ls at the root in the shell are missing), so I couldn't reach the file from the Mac GUI. Therefore I used the shell to navigate to the root directory and then to /usr/local, used the mkdir command to create the directory apache-maven, and entered it. I then moved the file there with the mv command. Next I tried extracting the file with tar -zxvf apache-maven-2.2.1-bin.tar.gz.

    The Error Message:

        tar: apache-maven-2.2.1/direcoryandfile: Cannot open: No such file or directory
        ...
        apache-maven-2.2.1/lib/ext: Cannot mkdir: No such file or directory
        apache-maven-2.2.1/lib/ext/README.txt
        tar: apache-maven-2.2.1/lib/ext/README.txt: Cannot open: No such file or directory
        tar: Error exit delayed from previous errors

    From what I can tell, the archive file is missing some directories or something. I tried deleting the file, re-downloading the .tar.gz from a different mirror, and repeating the process. Same result. Thanks again for the help.
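
    From that output alone it is hard to tell whether the archive is damaged or tar simply is not allowed to create the directories, so a reasonable next step is to test the file and then extract it with root privileges. A sketch using the paths from the question (treat this as a suggestion to narrow things down, not a known fix):

        cd /usr/local/apache-maven
        gunzip -t apache-maven-2.2.1-bin.tar.gz         # test the download for corruption
        tar -tzf apache-maven-2.2.1-bin.tar.gz | head   # list the contents without extracting
        sudo tar -zxvf apache-maven-2.2.1-bin.tar.gz    # extract as root so tar may create
                                                        # the apache-maven-2.2.1/... directories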

    Read the article

  • dynamically include zipfilesets into a WAR

    - by Konstantin
    hi all - a bit of clumsy situation but for the moment we cannot migrate to more straight-forward project layout. We have a project called myServices and it has 2 source folders (yes, I know, but that's the way it is for now) - I'm trying to make build process a bit more flexible so we now have a property called artifact.names that will be parsed by generic build.xml and based on name, it will call either jar or war task, eg. myService-war will create a WAR file with the following zipfilesets included there: myService-war-classes, myService-war-web-inf, myService-war-meta-inf. I want to add a bit more flexibility, and allow having additional zipfilesets, eg. myService-war-etc-1,2 etc - so these will be picked up by the package target automatically. I cannot use "if" inside war target, and also ${ant.refid:myService-war-classes} property is not resolved, so I'm kind of stuck at the moment with my options - how do I dynamically include a zipfileset into a WAR? You can refer to fileset by id, but it MUST be defined then, eg. you can't have it optional on project level. Thank you. Some build.xml snippets: <target name="archive"> <for list="${artifact.names}" param="artifact"> <sequential> <echo>Packaging artifact @{artifact} of project ${project.name}</echo> <property name="display.@{artifact}.packaging" refid="@{artifact}.packaging" /> <echo>${display.@{artifact}.packaging}</echo> <propertyregex property="@{artifact}.archive.type" input="@{artifact}" regexp="([a-zA-Z0-9]*)(\-)([ejw]ar)" select="\3" casesensitive="false"/> <propertyregex property="@{artifact}.base.name" input="@{artifact}" regexp="([a-zA-Z0-9]*)(\-)([ejw]ar)" select="\1" casesensitive="false"/> <echo>${@{artifact}.archive.type}</echo> <if> <then> <war destfile="${jar.dir}/${@{artifact}.base.name}.war" compress="true" needxmlfile="false"> <resources refid="@{artifact}.packaging" /> <zipfileset refid="@{artifact}-classes" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-meta-inf" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-web-inf" erroronmissingdir="false" /> <!-- Additional zipfilesets to package --> <zipfileset refid="@{artifact}-etc-2" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-3" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-4" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-5" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-6" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-7" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-8" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-9" erroronmissingdir="false" /> <zipfileset refid="@{artifact}-etc-10" erroronmissingdir="false" /> </war> </then> <!-- Generic JAR packaging --> <else> <jar destfile="${jar.dir}/@{artifact}.jar" compress="true"> <resources refid="@{artifact}.packaging" /> <zipfileset refid="@{artifact}-meta-inf" erroronmissingdir="false" /> </jar> </else> </if> </sequential> </for> </target>
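
    One workaround that keeps the <war> task exactly as written is to declare empty placeholder filesets once at project level for every optional slot, so each refid always resolves, and let a project that really has extra content redefine the id before the archive target runs. A rough sketch following the id naming scheme above (directory names are placeholders; Ant will log an "Overriding previous definition of reference" warning when an id is redefined, which is expected here):

        <!-- defaults: resolvable but empty, declared once at project level -->
        <zipfileset id="myService-war-etc-2" dir="." excludes="**/*"/>
        <zipfileset id="myService-war-etc-3" dir="." excludes="**/*"/>
        <!-- ... one per optional slot, up to etc-10 ... -->

        <!-- a project that really has extra content simply overrides the id -->
        <zipfileset id="myService-war-etc-2" dir="src/etc-extra" prefix="WEB-INF/etc"/>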

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment, Cti, in StreamInsight is very definitely one of the most important concepts to learn if you want your Streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article but a very good explanation in addition to Books Online can be found in these three articles by a member of the StreamInsight team at Microsoft, Ciprian Gerea. Time in StreamInsight Series http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx A lot of the problems I see with unresponsive or stuck streams on the MSDN Forums are to do with how Ctis are enqueued or in a lot of cases not enqueued. If you enqueue events and never enqueue a Cti then StreamInsight will be perfectly happy. You, on the other hand, will never see data on the output as you have not told StreamInsight to flush the stream. This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem. The stream of data I was dealing with on this project was very bursty that is to say when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engne it is best practice to do so with a StartTime that is given to you by the system producing the event . StreamInsight processes events and it doesn't matter whether those events are being pushed into the engine by a source system or the events are being read from something like a flat file in a directory somewhere. You can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important. We could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that will show exactly the problem I had. CREATE TABLE [dbo].[t] ( [c1] [int] PRIMARY KEY, [c2] [datetime] NULL ) INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810') Column c2 defines the StartTime of the event on the source and as you can see the values in all 3 rows of data is the same. If we read Ciprian’s articles we know that we can define how Ctis get injected into the stream in 3 different places The Stream Definition The Input Factory The Input Adapter I personally have always been a fan of enqueing Ctis through the factory. 
Below is code typical of what I would use to do this On the class itself I do some inheriting public class SimpleInputFactory : ITypedInputAdapterFactory<SimpleInputConfig>, ITypedDeclareAdvanceTimeProperties<SimpleInputConfig> And then I implement the following function public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape) { return new AdapterAdvanceTimeSettings( new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)), AdvanceTimePolicy.Adjust); } The configInfo .CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected and this in turn will flush through the stream of data. I usually pass a value of 1 for this setting. The second parameter determines the CTI timestamp in terms of a delay relative to the events. -1 ticks in the past results in 1 tick in the future, i.e., ahead of the event. The problem with this method though is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events currEvent.StartTime = (DateTimeOffset)dt.c2; If I go ahead and run my StreamInsight process with this configuration i can see on the output adapter that two events have been removed To see this in a little more depth I can use the StreamInsight Debugger and see what happens internally. What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (Also the EndTime of the event). The second event arrives and it has a StartTime of before the Cti and even though we specified AdvanceTimePolicy.Adjust on the factory we know that a point event can never be adjusted like this and the event is dropped. The same happens for the third event as well (The second and third events get trumped by the Cti). For a more detailed discussion of why this happens look here http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx We end up with a single event being pushed into the output adapter and our result now makes sense. The next way I tried to solve this problem by changing the value of the second parameter to TimeSpan.Zero Here is how my factory code now looks public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape) { return new AdapterAdvanceTimeSettings( new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero), AdvanceTimePolicy.Adjust); } What I am doing here is declaring a policy that says inject a Cti together with every event and stamp it with a StartTime that is equal to the start time of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The Downside is that because the Cti is declared with the StartTime of the event itself then it does not actually flush that particular event because in the StreamInsight algebra, a Cti commits only those events that occurred strictly before them. To flush the events we need a Cti to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration As you can see all we got through was the Cti and none of the events. The debugger output shows the stamps on the Cti and the events themselves. 
Because the Cti issued has the same timestamp (StartTime) as the events then none of the events get flushed. I was nearly there but not quite. Because my stream was bursty it was possible that the next event would not come along for a few seconds and this was far too long for an event to be enqueued and not be flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind although only one of them made sense when I explored it some more. Where multiple events have the same StartTime I could add 1 tick to the first event, two to the second, three to third etc thereby giving them unique StartTime values. Add a timer to manually inject Ctis The problem with the first implementation is that I would be giving the events a new StartTime. This would cause me the following problems If I want to define windows over the stream then some events may not be captured in the right windows and therefore any calculations on those windows I did would be wrong What would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick. It is now too far in the past as far as my stream is concerned and it would be dropped. Not what I would want to do at all. I decided then to look at the Timer based solution I created a timer on my input adapter that elapsed every 200ms. private Timer tmr; public SimpleInputAdapter(SimpleInputConfig configInfo) { ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString); this.configInfo = configInfo; tmr = new Timer(200); tmr.Elapsed += new ElapsedEventHandler(t_Elapsed); tmr.Enabled = true; } void t_Elapsed(object sender, ElapsedEventArgs e) { ts = DateTime.Now - dtCtiIssued; if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false) { EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100)); TimerIssuedCti = true; } }   In the t_Elapsed event handler I find out the difference in time between now and when the last event was processed (dtCtiIssued). I then check to see if that is greater than or equal to 200ms and if the last issuing of a Cti was done by the timer or by a genuine event (TimerIssuedCti). If I didn’t do this check then I would enqueue a Cti every time the timer elapsed which is not something I wanted. If the difference between the two times is greater than or equal to 500ms and the last event enqueued was by a real event then I issue a Cti through the timer to flush the event Queue, otherwise I do nothing. When I enqueue the Ctis into my stream in my ProduceEvents method I also set the values of dtCtiIssued and TimerIssuedCti   currEvent = CreateInsertEvent(); currEvent.StartTime = (DateTimeOffset)dt.c2; TimerIssuedCti = false; dtCtiIssued = currEvent.StartTime; If I go ahead and run this configuration I see the following in my output. As we can see the first Cti gets enqueued as before but then another is enqueued by the timer and because this has a later timestamp it flushes the enqueued events through the engine. Conclusion Hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is for me one of the most important things you can learn. I have attached my solution for the demos. It is all in one project and testing each variation is a simple matter of commenting and un-commenting the parts in the code we have been dealing with here.

    Read the article

  • CodePlex Daily Summary for Monday, April 19, 2010

    CodePlex Daily Summary for Monday, April 19, 2010New Projects8085 Microprocessor simulator: This program allows you to write 8085 programs in assembly and run those programs on your PC. It comes with lots of help, plus you can put breakpo...Additional.NET framework: The Additional.Net framework extends the functionality of the .NET framework for easier application development. It is developed in C#.Astoria Contrib: A contrib project for filling the gaps in WCF Data Services, providing missing functionality or augmenting with T4 templates, helpers, etc.ClipoWeb: ClipoWeb is a web clipboard that allows you to copy text and files between computers. Users access a web page on the source and destination compute...elearning Center: Đây là một ứng dụng web viết hoàn toàn bằng Sliverlight. Ứng dụng này là một dạng elearning với đầy đủ chức năng và có khả năng tương tác tối đa v...Excel VSTO SQL Server Browser: Get Data from SQL Server and put it in Excel directly. The objective is to get more control about what do you need to pull and create automatic pro...Generic Tree Structure: Generic Base Classes that helps you to create complex tree structures without writing it again and again. Simply to use Like "var Node<Folder> fold...LAN Lordz LAN Party System: The LAN Lordz LAN Party System makes it easier for medium and large size events to track their attendance, sponsors, door prizes, tournaments, and...LiteFx: O LiteFx é um framework que ajuda na implementação de DDD (anêmico ou rico) ele foi desenvolvido por Douglas Aguiar (http://twitter.com/DouglasAgui...Managed UI Flow for ASP.NET MVC Framework: If your web application getting more complex, understanding and managing of complex UI flows(pageflow of application) getting harder and harder, If...Meus Exemplos: Meus ExemplosOrchard Blueprint Theme: Orchard BluePrint is a project that provides a WYSIWYG reference implementation of a Orchard theme to help designers get started with theme design....Outlook Social Network Connector - Avatar: Avatar 是一个开源的MS Outlook的插件,豆瓣用户可以在Outlook 2010中使用豆瓣。查看一封邮件中相关的收件人、发件人的用户广播、同城活动以及豆邮。不用上豆瓣也能方便了解好友动态。这个插件使用C#, .NET 4.0 开发。API 请求认证使用OAuth 认证。 (Avat...Quadro Tree: This is Quadri tree library.Sharepoint 2010 Alert Controller: In MOSS 2007 or Sharepoint Server 2010 if you want to see your alerts by list name you should use this tool.SharePoint Web Parts: The goal of this project is to develop a set of web parts for SharePoint.Silverlight Image Cropper: This is a silverlight 4 util that makes it easy to crop out a number images of a specific resolution screen or screens. ie. an easy way to crop ...SilverlightFTP: Silverlight ftp clientsplibex: libraries for sharepoint lists manipulationStardustExtensions: Official Extensions for StardustSwim Team Manager: Swim Team Manager is designed for managing and tracking administrative and performance information for your club, school, or swim team. Swim Tea...ToDoListWpf: A To Do List, I used it to manage my work items. I am sorry for my poor English.Trance Layer: TranceLayer is a fast and flexible logging or diagnostics framework for .Net. It allows you to plug it into an existing or new application with m...Unoficcial NeoFM.hu NowPlaying: A little windows tray program. 
Shows what's on neofm.hu right now.WabbitStudio Z80 Software Tools: The software suite provides all of the tools you need to create high quality Z80 software in Z80 assembly language, with a focus on TI calculators....WinToolbar: Windows.Toolbar is Silverlight library that implements common widgets that allows us to build a rich toolbar control in our applications, it incor...XP-More: XP-More is a tool that helps manage Windows 7 Virtual Machines (XP Mode and any other). Specifically, it makes duplication of VMs a no brainer - no...Yodelay .NET Framework Extensions: The Yodelay .NET Framework Extensions project provides a library of components that make many kinds of programming tasks simpler. These include bas...New ReleasesClipoWeb: ClipoWeb 1.0: First Beta release of the ClipoWeb web applicationDDDSample.Net: 0.8: This release contains all four versions of DDDSample.Net available in previous, 0.7 and a brand new one: Layered Model version. Layered Model demon...DotNetNuke Blueprint: 00.00.02: Added to this version CSS Reset Skin version including Grids This version will soon be updated with corresponding HTML version and DNN templateEsferatec.Text.RegularExpressions: 3.5.1003.1001: first stable release of the class; the assembly file is ready to use, the documentation is complete;Excel VSTO SQL Server Browser: Sample Only: Sample without Ribbon UI, if you close the TaskPane you will no longer able to open it without restart ExcelFolder Bookmarks: Folder Bookmarks 1.5.5: This is the latest version of Folder Bookmarks (1.5.5), with the new Archive Manager and Archive Viewer. It has an installer - it will create a dir...Gardens Point LEX: Gardens Point LEX, Version 1.1.3: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.1.3 corre...Gardens Point Parser Generator: Gardens Point Parser Generator V1.4.0: The distribution is a zip archive which contains the binary executables, documentation, source code and examples. ChangesVersion 1.4.0 of GPPG has...HKGolden Express: HKGoldenExpress (Build 201004181455): New features: Added rating of each topic. (Note: This feature is availabe since Build 201004172120) Bug fix: Handle invalid XML character in XML...Home Access Plus+: v4.0.0.0 Beta: v4.0.0.0 Beta Change Log: Moved to using .net 4.0 New Silverlight Uploader Various .net 4 fixes and tweaks File Changes: All fixes have changedHTML Ruby: 6.21.6: Reduced performance hit on pages with heavy DOM manipulations Fixed issue where empty tags caused it to apply invalid spacing values Stop spaci...LINQ to VFP: LinqToVfp (v1.0.17.2): Modified to allow using RecNo as a primary key. This build requires IQToolkit v0.17b.Managed UI Flow for ASP.NET MVC Framework: Preview 1: The source available on this site, does not reflect the final state of the project, it is a preview of what will be shipping in the framework in th...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (2): Super minor update to accommodate the new Blend 4 RC. Only changes: The path to the Blend 4 templates changed to be "My Documents\Expression\Blend...N2 CMS: 2.0 rc: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes (1.5 -> 2.0 release candid...OpenGL ES 2.0 Compact Framework Wrapper: Sample application CAB with texturing: This took some time as it was pretty hard to get the texture loaded and setup so that it would bind to the sampler2D in the fragment shader. 
Featu...Orchard Blueprint Theme: 00.00.01: This is the first release of this project, still in a very alpha version. Very soon this release is to be updated with the HTML version of the them...RoughJs: RoughJsSL: This is Silverlight library's CompilerSharepoint 2010 Alert Controller: Sharepoint 2010 Alert Controller: After you download WSP file you can get help from Home PageSharePoint LogViewer: SharePoint LogViewer 2.5: Minimize log viewer to tray Get popup notification of SharePoint log events from tray Redirect log entries to event log Send email notifications on...Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.1: This is a minor update which includes the following changes: Code consolidation across the whole project Additional site data captured. See solut...Stardust: Stardust 1.0: First stable version of Stardust (Build 172)StardustExtensions: Facebook Extension: Extension for stardust to upload and post images on Facebook.StardustExtensions: Facebook Extension (Source): The source code of an extension for Stardust used to post images on facebook.StardustExtensions: WPF Example: This is an example extension. Uses WPF to create a Window and say "Hello World!" Is a perfect download if want to start writing Stardust ExtensionsStardustExtensions: WPF Example Source: This is the source code of an extension that creates a Window using WPF & displays a simple text. Is great as an example of creating Stardust Exten...TFTP Server: TFTP Server 1.1 Beta installer: New MSI based installer Installs a TFTP service Supports multiple servers on different endpoints, with every server pointing to its own root di...TiledLib: TiledLib 1.1: This download is for prebuilt DLLs and a demo project. For the full source code, use the Source Code tab. Changes: Bug fixes in a few methods Ad...Trance Layer: TranceLayer Digger: Digger version is a beta. It is intended to be used as a demonstration of muscles while lacking a set of features that are in the docs. The set of ...uManage - Active Directory Self-Service Portal: uManage v1.2 (.NET 4.0 RTM): New Releasev1.2 Adds the Administrative Portal as well as the requirement of a MSSQL database (2005+). The Setup Wizard has also been updated to i...Unoficcial NeoFM.hu NowPlaying: NeoNotifier: First release. Aplha, but usable.VidCoder: 0.2.1: Changes: Added 2-pass encoding Fixed x264 options getting mangled during p-invoke Fixed intermittent crash with logging window open due to thre...WCF RIA Services Contrib: WCF RIA Services Contrib RC2 Release: This version is for the WCF RIA Services RC2 (SL4 RTM) release. The ApplyState has been modifed in this version to disable validation during proces...WiiCIS.NET: WiiCIS.NET v0.2: Changes... 
- Removal of WiimoteManager, connection must be done manually - Accelerometer orientation was originally in degrees, is now in radians -...WinToolbar: WinToolbar Source code plus sample: This zip file contains the current version source code and libraries plus a testrunner (sample app).XP-More: 0.9 (Beta): Most of the functionality is in place, final polishing will be done soon.Most Popular ProjectsFacebook Developer ToolkitWSPBuilder (SharePoint WSP tool)QuickGraph, Graph Data Structures And Algorithms for .NetPerformance Analysis of Logs (PAL) Toolpatterns & practices: Team Development with Visual Studio Team Foundation ServerTFS Integration Platformpatterns & practices: Performance Testing Guidance for Web Applicationspatterns & practices: Enterprise Library ContribJSON ViewerManaged Wifi APIMost Active ProjectsRawrpatterns & practices – Enterprise LibraryIndustrial DashboardIonics Isapi Rewrite FilterFarseer Physics EngineMVVM Light ToolkitjQuery Library for SharePoint Web ServicesN2 CMSCaliburn: An Application Framework for WPF and SilverlightBlogEngine.NET

    Read the article

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know a weblogic domain consists of set of servers running independently or in a cluster mode, sharing the distributed resources. And in most environments weblogic  cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability.  These servers can run on the same machine, or be located on different machines.  It's a common task to increase a cluster's capacity by adding new machines to the cluster to host the new server instances.  You can do it by manually installing weblogic binaries to the new host and use pack/unpack commands to add a managed server to this new host.  But with Enterprise Manager Grid Control 11gR1 (EMGC) there is  another way - Fusion Middleware Domain Scale Up  procedure. I'm going to show you how it works.Here is a picture of  my medrec_oradb weblogic domain, what is registered in EMGC. It contains an admin server and a cluster MedRecCluster with  the single managed server MS1. Both admin and managed servers are on the same host oel46-vmware, it's a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure.  And here are the application deployments, note that couple of applications are deployed to the cluster.First of all I have to prepare a new machine that will host new managed sever of my cluster. I created new VM with OEL 5.4 using the corresponding Oracle VM template available in Oracle E-Delivery site for Oracle Linux and Oracle VM and named it wls1032. Next step is to install Oracle EM Grid Control 11gR1 Agent to this new host.  You can download it from the OTN page and install it manually,  or you can use Agent Installation Deployment procedure available in EMGC  (Deployments->Agent Installation->Install Agent). Anyway, when you agent is up and running on the new machine, you will see it in EMGC Console in the Targets->Hosts subtab.Now we are ready to scale up our weblogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select a Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process.Select the domain from list, enter the working directory on the admin server host, and also fill the weblogic credentials for the administration server console and the OS credentials for the  admin server host.  Click Next button.  The next step allows you to configure you domain, to add a new manager server to the cluster you should select the cluster in the tree and click Add Server button. Select the newly added server in a tree, choose the target host and  enter the configuration details of your managed server. You can also add new machine and node manager details.  Please note that you cannot change the values in  Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as for the admin server host.   Working directory on the target host should have enough free space to store FMW home binaries and domain configuration files.  In my experience the working directories should have at least 3 Gb of free space.  The last thing you should fill is the OS credentials for the target host. The next steps allows you to schedule the execution of the procedure, it is started immediately in my example. The last step is just a review the configuration for the domain scale up. Click Submit to launch the process. 
You can track the status of the procedure execution by selecting Deployments->Deployment Procedures->Procedure Completion Status in the EMGC Console.As you can see in the picture below, the procedure consists of the many steps, and I'm going to share my experience about the issues that I had at some of the steps. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking Retry button.Check OUI Prerequisites  step may fail if the target host does  not pass prerequisites checks for Weblogic Server installation such as amount of RAM, linux packages installed, etc. Create FMW Clone Archive step may fail if you do not have enough free space in the working directory on the administration server host.Transfer cloning archive to targets  step  may fail if the EMGC agents on the admin server host or on target host are not secured.   You should secure the agent by issuing ./emctl secure agent  command from $AGENT_HOME/bin directory and entering the agent registration password.Both Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host. The most complicated issue I had on the Run Inventory Collection  step. The step failed and I noticed that the agent on the target server is also failed with the following error in the $AGENT_HOME/sysman/log/emagent.trc  log file:2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/uploadFATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 32010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminatingI checked the timezone of my domain target inside EMGC repositoryselect timezone_regionfrom mgmt_targets where target_type = 'weblogic_domain'  and display_name = 'medrec_oradb'"TIMEZONE_REGION""America/Los_Angeles"Then checked the timezone of my agents and indeed, they differedselect target_name, timezone_region from mgmt_targets where type_display_name = 'Agent'"TARGET_NAME"    "TIMEZONE_REGION""oel46-vmware:3872"    "America/Los_Angeles""wls1032.imc.fors.ru:3872"    "America/New_York"So I had to change the timezone on the wls1032 host and propagate this changes to the agent and to the EMGC repository. 
    Here were the steps:

1. Issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles".
2. Propagated the change to the agent by executing the ./emctl resetTZ agent command from the $AGENT_HOME/bin directory.
3. Connected to the EMGC repository as sysman and executed the following PL/SQL block:

   begin
      mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872','America/Los_Angeles');
      commit;
   end;

After that I had to clear the pending uploads on wls1032.imc.fors.ru:

  rm -r $AGENT_HOME/sysman/emd/state/*
  rm -r $AGENT_HOME/sysman/emd/collection/*
  rm -r $AGENT_HOME/sysman/emd/upload/*
  rm $AGENT_HOME/sysman/emd/lastupld.xml
  rm $AGENT_HOME/sysman/emd/agntstmp.txt
  $AGENT_HOME/bin/emctl start agent
  $AGENT_HOME/bin/emctl clearstate agent

The last part of this solution was to resynchronize the agent in the EMGC console by clicking the Agent Resynchronization button (please leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked in the next screen). After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I caught another one:

EMD upload error: Failed to upload file A0000004.xml: HTTP error.Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880]

So the uploaded XML file size was 7 MB, and the limit on the OMS was 5 MB. To increase the maximum file size limit to 20 MB I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:

 ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
 ./emctl stop oms
 ./emctl start oms

After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host one more time and it completed successfully. The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my WebLogic domain scale-up in the EMGC Console. So, now the WebLogic cluster contains two managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is a part of the WebLogic Server Management Pack Enterprise Edition.
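    If you want to catch this kind of mismatch before running the procedure, the two repository queries above can be combined into a single check. This is only a sketch built from the mgmt_targets columns already shown in this post; run it as sysman:

      select target_name, target_type, timezone_region
        from mgmt_targets
       where type_display_name = 'Agent'
          or (target_type = 'weblogic_domain' and display_name = 'medrec_oradb')
       order by timezone_region;
      -- any agent whose TIMEZONE_REGION differs from the weblogic_domain row will need the resetTZ treatment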

    Read the article

  • 12c - Invisible Columns...

    - by noreply(at)blogger.com (Thomas Kyte)
    Remember when 11g first came out and we had "invisible indexes"?  It seemed like a confusing feature - indexes that would be maintained by modifications (hence slowing them down), but would not be used by queries (hence never speeding them up).  But - after you looked at them a while, you could see how they can be useful.  For example - to add an index in a running production system, an index used by the next version of the code to be introduced later that week - but not tested against the queries in version one of the application in place now.  We all know that when you add an index - one of three things can happen - a given query will go much faster, it won't affect a given query at all, or... It will make some untested query go much much slower than it used to.  So - invisible indexes allowed us to modify the schema in a 'safe' manner - hiding the change until we were ready for it.Invisible columns accomplish the same thing - the ability to introduce a change while minimizing any negative side effects of that change.  Normally when you add a column to a table - any program with a SELECT * would start seeing that column, and programs with an INSERT INTO T VALUES (...) would pretty much immediately break (an INSERT without a list of columns in it).  Now we can add a column to a table in an invisible fashion, the column will not show up in a DESCRIBE command in SQL*Plus, it will not be returned with a SELECT *, it will not be considered in an INSERT INTO T VALUES statement.  It can be accessed by any query that asks for it, it can be populated by an INSERT statement that references it, but you won't see it otherwise.For example, let's start with a simple two column table:ops$tkyte%ORA12CR1> create table t  2  ( x int,  3    y int  4  )  5  /Table created.ops$tkyte%ORA12CR1> insert into t values ( 1, 2 );1 row created.Now, we will add an invisible column to it:ops$tkyte%ORA12CR1> alter table t add                     ( z int INVISIBLE );Table altered.Notice that a DESCRIBE will not show us this column:ops$tkyte%ORA12CR1> desc t Name              Null?    Type ----------------- -------- ------------ X                          NUMBER(38) Y                          NUMBER(38)and existing inserts are unaffected by it:ops$tkyte%ORA12CR1> insert into t values ( 3, 4 );1 row created.A SELECT * won't see it either:ops$tkyte%ORA12CR1> select * from t;         X          Y---------- ----------         1          2         3          4But we have full access to it (in well written programs! The ones that use a column list in the insert and select - never relying on "defaults":ops$tkyte%ORA12CR1> insert into t (x,y,z)                         values ( 5,6,7 );1 row created.ops$tkyte%ORA12CR1> select x, y, z from t;         X          Y          Z---------- ---------- ----------         1          2         3          4         5          6          7and when we are sure that we are ready to go with this column, we can just modify it:ops$tkyte%ORA12CR1> alter table t modify z visible;Table altered.ops$tkyte%ORA12CR1> select * from t;         X          Y          Z---------- ---------- ----------         1          2         3          4         5          6          7I will say that a better approach to this - one that is available in 11gR2 and above - would be to use editioning views (part of Edition Based Redefinition - EBR ).  
I would rather use EBR over this approach, but in an environment where EBR is not being used, or the editioning views are not in place, this will achieve much the same.Read these for information on EBR:http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o10asktom-172777.htmlhttp://www.oracle.com/technetwork/issue-archive/2010/10-mar/o20asktom-098897.htmlhttp://www.oracle.com/technetwork/issue-archive/2010/10-may/o30asktom-082672.html
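A footnote worth verifying on your own 12c instance: the change is reversible, and SQL*Plus can be told to include invisible columns in DESCRIBE output:

      -- back the change out if needed
      alter table t modify z invisible;
      -- SQL*Plus 12c setting that makes DESCRIBE list invisible columns as well
      set colinvisible on
      desc t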

    Read the article

  • Microsoft silverlight 5.0 features for developers

    - by Jalpesh P. Vadgama
    Recently at the Silverlight 5.0 Firestarter event, ScottGu announced the roadmap for Silverlight 5.0. There will be lots of features in Silverlight 5.0, but here are a few glimpses of them. Improved data binding support and better support for MVVM: One of the greatest strengths of Silverlight is its data binding. Microsoft is going to enhance data binding by making it easier to debug - developers will be able to debug binding expressions and related plumbing in Silverlight 5.0. It will also provide ancestor RelativeSource binding, which allows a property to bind to a container control. MVVM pattern support will also be enhanced. Performance and speed enhancements: Silverlight 5.0 will support 64-bit browsers, so you can run Silverlight applications on 64-bit platforms without any extra work. It will also have faster startup times and greater support for hardware acceleration, including end-to-end support for the hardware acceleration features of IE 9. More support for out-of-browser applications: With Silverlight 4.0 Microsoft announced out-of-browser applications, which amazed many developers because the possibilities became almost unlimited. In Silverlight 5.0, out-of-browser applications will be able to create and manage child windows just like Windows Forms or WPF applications, so you can bring desktop-style power to your out-of-browser application. Testing support with Visual Studio 2010: Microsoft is adding automated UI testing support for Silverlight 5.0 in Visual Studio 2010, so Silverlight UIs can be tested much faster. Better support for RIA Services: RIA Services allows us to create N-tier applications with Silverlight by generating proxy classes on both the client and the server. It will gain more features such as complex type support and custom type support for the MVVM (Model View ViewModel) pattern. WCF enhancements: There are lots of enhancements to WCF, but the key one is WS-Trust support. Text and printing support: Silverlight 5.0 will support vector-based graphics, multicolumn text flow and linked text containers, full OpenType support, and PostScript vector enhancements. Improved power management: This will prevent the screensaver from activating while you are watching videos in Silverlight - Silverlight 5.0 will be smart enough to determine when you are watching video and when you are not. Better support for graphics: Silverlight 5.0 will provide in-depth support for a 3D API, so 3D graphics can be rendered much more easily. You can find more details at the following links, and don't forget to view ScottGu's Silverlight Firestarter keynote video. http://www.silverlight.net/news/events/firestarter-labs/ http://blogs.msdn.com/b/katriend/archive/2010/12/06/silverlight-5-features-firestarter-keynote-and-sessions-resources.aspx http://weblogs.asp.net/scottgu/archive/2010/12/02/announcing-silverlight-5.aspx http://www.silverlight.net/news/events/firestarter/ http://www.microsoft.com/silverlight/future/ Hope this will help you. Stay tuned!!!

    Read the article

  • BizTalk Cross Reference Data Management Strategy

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott This article describes an approach to the management of cross reference data for BizTalk.  Some articles about the BizTalk Cross Referencing features can be found here: http://home.comcast.net/~sdwoodgate/xrefseed.zip http://geekswithblogs.net/michaelstephenson/archive/2006/12/24/101995.aspx http://geekswithblogs.net/charliemott/archive/2009/04/20/value-vs.id-cross-referencing-in-biztalk.aspx Options Current options to managing this data include: Maintaining xml files in the format that can be used by the out-of-the-box BTSXRefImport.exe utility. Use of user interfaces that have been developed to manage this data: BizTalk Cross Referencing Tool XRef XML Creation Tool However, there are the following issues with the above options: The 'BizTalk Cross Referencing Tool' requires a separate database to manage.  The 'XRef XML Creation' tool has no means of persisting the data settings. The 'BizTalk Cross Referencing tool' generates integers in the common id field. I prefer to use a string (e.g. acme.country.uk). This is more readable. (see naming conventions below). Both UI tools continue to use BTSXRefImport.exe.  This utility replaces all xref data. This can be a problem in continuous integration environments that support multiple clients or BizTalk target instances.  If you upload the data for one client it would destroy the data for another client.  Yet in TFS where builds run concurrently, this would break unit tests. Alternative Approach In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables combined with a data namepacing strategy to isolate client data. Naming Conventions All data keys use namespace prefixing.  The pattern will be <companyName>.<data Type>.  The naming conventions will be to use lower casing for all items.  The data must follow this pattern to isolate it from other company cross-reference data.  The table below shows some sample data. (Note: this data uses the 'ID' cross-reference tables.  the same principles apply for the 'value' cross-referencing tables). Table.Field Description Sample Data xref_AppType.appType Application Types acme.erp acme.portal acme.assetmanagement xref_AppInstance.appInstance Application Instances (each will have a corresponding application type). acme.dynamics.ax acme.dynamics.crm acme.sharepoint acme.maximo xref_IDXRef.idXRef Holds the cross reference data types. acme.taxcode acme.country xref_IDXRefData.CommonID Holds each cross reference type value used by the canonical schemas. acme.vatcode.exmpt acme.vatcode.std acme.country.usa acme.country.uk xref_IDXRefData.AppID This holds the value for each application instance and each xref type. GBP USD SQL Scripts The data to be stored in the BizTalkMgmtDb xref tables will be managed by SQL scripts stored in a database project in the visual studio solution. File(s) Description Build.cmd A sqlcmd script to deploy data by running the SQL scripts below.  (This can be run as part of the MSBuild process).   acme.purgexref.sql SQL script to clear acme.* data from the xref tables.  As such, this will not impact data for any other company. acme.applicationInstances.sql   SQL script to insert application type and application instance data.   acme.vatcode.sql acme.country.sql etc ...  There will be a separate SQL script to insert each cross-reference data type and application specific values for these types.
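To make the approach concrete, an acme.country.sql script might look roughly like the sketch below. The column lists are illustrative only - the real BizTalkMgmtDb xref tables key their rows on application-instance and xref IDs - so check the actual schema before reusing this:

      -- acme.country.sql (sketch; not the author's actual script)
      INSERT INTO xref_IDXRef (idXRef) VALUES ('acme.country');

      -- one row per application instance and common ID
      INSERT INTO xref_IDXRefData (idXRef, appInstance, CommonID, AppID)
      VALUES ('acme.country', 'acme.dynamics.ax', 'acme.country.uk',  'GBR'),
             ('acme.country', 'acme.dynamics.ax', 'acme.country.usa', 'USA');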

    Read the article

  • Keepin’ It Simple with StorageTek SL150

    - by Kristin Rose
    Are your customers’ archive and data protection environments getting out of hand?  Are they looking for a little simplicity in their lives? How about some scalability? Or are they looking for a way to save on capital and operational expenses? If you answered yes to any of these, then Oracle's new StorageTek SL150 Modular Tape Library is the product for you. It beats the competition in terms of simplicity, scalability and savings, and provides some seriously wallet friendly revenue opportunities for you. If the long-term service annuities on the SL150 aren’t convincing enough, then the resale margins, rebates and follow-on revenue from modular upgrades will be!  The SL150 simplifies StorageTek’s tape portfolio by replacing three products with one scalable solution that provides an entry point for repeat business within accounts. The SL150 expands your potential storage customer base to smaller companies with low cost, simple upgrades and streamlined management that help alleviate key customer pain points. With the SL150, your customers will be able to simplify growth of their archive and data protection environments with small entry configurations and 10x growth, something that would require multiple box swaps across up to three product categories with competitive products. With the SL150, Oracle can help you provide greater customer satisfaction with Simplicity, Scalability and Savings! We know you’re probably wondering how you can get started and sell this new and magnificent product… Well, look no further because the only thing you need to do is complete the SL150 Guided Learning Paths (GLPs). For some extra insight, watch the video below on the new StorageTek SL150 modular tape library, and don’t forget to ‘tweet’ this post, and share it on Facebook to spread the good news! Wishing you Simplicity, Scalability and Savings, The OPN Communications Team

    Read the article

  • Assign highest priority to my local repository

    - by Anwar Shah
    The original question was: "How to assign highest priority to a local repository without using the sources.list file". I have set up a local repository with packages I downloaded. I use it to avoid downloading the same packages over the Internet when I need to reinstall Ubuntu. It is a basic repository, created with apt-ftparchive packages . > Packages. I made it a trusted repository to avoid the "unauthenticated repository" warning (when a repository is untrusted, apt or synaptic download the same packages over the Internet again from a trusted source instead). I have been using this local repository for at least a year, but I always have to put my local repository line at the top of the sources.list file to use it. This is annoying, since I must open a terminal and do some typing every time I reinstall Ubuntu, even though there is a better tool, software-properties-gtk. I cannot use that tool since it places the source line at the end of sources.list, and the real problem is that apt and synaptic always download a package from the source that is mentioned earlier, without checking whether the package is already available in the local repository. So I have no choice but to place the local source at the top of sources.list from the terminal (I don't actually hate the terminal, but I need a better solution). I have tried this method, but it does not help me. My preference file is /etc/apt/preferences.d/local-pin-900:

Package: *
Pin: release o=Local,n=ubuntu-local
Pin-Priority: 900

My Release file is:

Origin: Local
Label: Local-Ubuntu
Description: Local Ubuntu Repository
Codename: ubuntu-local
MD5Sum:
 ed43222856d18f389c637ac3d7dd6f85 1043412 Packages
 d41d8cd98f00b204e9800998ecf8427e 0 Sources

When I enable the apt preference, apt-cache policy correctly shows it, i.e. it shows that the local repository has the highest priority. But when I run sudo apt-get install <package-name>, apt tries to download the package from the Internet; only when I place my local repo at the top does it install from the local repository. So my question is: is it possible to force apt to use the local repository when the package is available there, without explicitly placing the local source at the top of my repository list (the sources.list file)? Edit: the output of apt-cache policy $package_name is as follows:

nautilus-wipe:
  Installed: (none)
  Candidate: 0.1.1-2
  Version table:
     0.1.1-2 0
        500 http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages
        900 file:/media/Main/Linux-Software/Ubuntu/Precise/ Packages

It shows that my local repository has the higher preference, even though it is not the one that comes first in sources.list. Here is the output of apt-get install nautilus-wipe:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  nautilus-wipe
0 upgraded, 1 newly installed, 0 to remove and 131 not upgraded.
Need to get 30.7 kB of archives.
After this operation, 150 kB of additional disk space will be used.
'http://archive.ubuntu.com/ubuntu/pool/universe/n/nautilus-wipe/nautilus-wipe_0.1.1-2_i386.deb' nautilus-wipe_0.1.1-2_i386.deb 30730 MD5Sum:7d497b8dfcefe1c0b51a45f3b0466994

It is still trying to get the file from the Internet, though I think it should be happy with the local one.
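One variation worth trying (taken from the apt_preferences man page rather than verified against this exact setup): an empty origin string matches local file: repositories, and a priority above 1000 makes apt prefer that source even when the same or a newer version exists elsewhere. A sketch for /etc/apt/preferences.d/local-pin:

      Package: *
      Pin: origin ""
      Pin-Priority: 1001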

    Read the article

  • Dual Screen will only mirror after 12.04 upgrade

    - by Ne0
    I have been using Ubuntu with a dual screen for years now, after upgrading to 12.04 LTS i cannot get my dual screen working properly Graphics: 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600] 01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600] (Secondary) I noticed i was using open source drivers and attempted to install official binaries using the methods in this thread. Output: liam@liam-desktop:~$ sudo apt-get install fglrx fglrx-amdcccle Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be upgraded: fglrx fglrx-amdcccle 2 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. Need to get 45.1 MB of archives. After this operation, 739 kB of additional disk space will be used. Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx i386 2:8.960-0ubuntu1 [39.2 MB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx-amdcccle i386 2:8.960-0ubuntu1 [5,883 kB] Fetched 45.1 MB in 1min 33s (484 kB/s) (Reading database ... 328081 files and directories currently installed.) Preparing to replace fglrx 2:8.951-0ubuntu1 (using .../fglrx_2%3a8.960-0ubuntu1_i386.deb) ... Removing all DKMS Modules Error! There are no instances of module: fglrx 8.951 located in the DKMS tree. Done. Unpacking replacement fglrx ... Preparing to replace fglrx-amdcccle 2:8.951-0ubuntu1 (using .../fglrx-amdcccle_2%3a8.960-0ubuntu1_i386.deb) ... Unpacking replacement fglrx-amdcccle ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Setting up fglrx (2:8.960-0ubuntu1) ... update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-initramfs: deferring update (trigger activated) update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae Loading new fglrx-8.960 DKMS files... Building only for 3.2.0-25-generic-pae Building for architecture i686 Building initial module for 3.2.0-25-generic-pae Done. fglrx: Running module version sanity check. 
- Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/3.2.0-25-generic-pae/updates/dkms/ depmod....... DKMS: install completed. update-initramfs: deferring update (trigger activated) Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up fglrx-amdcccle (2:8.960-0ubuntu1) ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae Processing triggers for libc-bin ... ldconfig deferred processing now taking place liam@liam-desktop:~$ sudo aticonfig --initial -f aticonfig: No supported adapters detected When I attempt to get my settings back to what they were before upgrading I get this message: requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680) and GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._gnome_2drr_2derror_2dquark.Code3: requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680) Any ideas on what I need to do to fix this issue?
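For what it's worth, current fglrx/Catalyst releases have dropped support for the RV350 (Radeon 9600), which is why aticonfig reports no supported adapters; the open-source radeon driver is the usual fallback. On that driver, the "outside the allowed limit ... maximum=(1680, 1680)" error generally means the combined desktop is larger than the allocated framebuffer, and raising the Virtual size in /etc/X11/xorg.conf is the standard workaround. A sketch, with the size assumed from the error message (1440 + 1440 wide by 900 high):

      Section "Screen"
          Identifier "Default Screen"
          SubSection "Display"
              Virtual 2880 900
          EndSubSection
      EndSection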

    Read the article

  • Oracle Fusion Middleware (OFM) 11g (11.1.1.7) Starter Kit available & Customizable Demos

    - by JuergenKress
    OFM PS6 starter kit is now available from Global Sales Engineering (GSE, formerly DSS).  OFM Starter Kit provides a basic foundation to design and develop middleware demos. It is based on plug and play architecture and designed to use optimal hardware resources.  The starter kit is easily extendable to incorporate more Oracle Fusion Middleware components. New Features Built on the "Build your own demos (POC)" concept Starter Kit comes with core OFM Components Oracle Unified Directory (OUD, SOA, WebCenter Content and WebCenter Spaces) Starter Kit is available over the Internet and is tuned for optimal performance Portable/Downloadable version of the Starter Kit will be available soon. Please check Demos Corner. For and questions/feedback please contact chandan Das or Anand Prasad. Call to Action Review the Release Notes. & Visit the GSE Website and book the “OFM 11.1.1.7.0 Base Platform” customizable instance. Further information about this platform is available on this page. This announcement will appear in the archive as number 412. Customizable Demos We are happy to announce the availability of the SOA 11.1.1.7.0 Platform.  SOA 11g (11.1.1.7) Platform is fully featured, built on Plug and Play architecture, and designed to develop best of breed SOA demos. New Features Built on the "Build your own demos" concept Fusion Middleware products SOA, BAM, OSB, OEP, OER, OSR, WebCenter Content and WebCenter Spaces are installed, configured, and tuned for better performance Demo instances are available over the Internet Portable version of the platform will be available soon. Please check Demos Corner For questions/feedback please contact Anvesh Baluguri or Anand Prasad. Call to Action Review the Release Notes & Visit the GSE Website and book the “SOA 11.1.1.7.0 Platform” customizable demo. Further information about this platform is available on this page.  This announcement will appear in the archive as number 413. To get access to the demo environment please contact OPN! Support If you need assistance or encounter any issues please submit a GSE Repository ticket or call the GSE Support Hotline for assistance. The GSE Support Hotline is available 24 hours a day, Monday through Friday, at: US/CAN: +1.650.506.8763 & EMEA: +44 118 9240808 & APAC: +65.6436.2150 & LAD: +1.650.506.8763 & Japan: +81-3-6834-6097. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: OFM,demos,sales,marketing,dss,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message “[Partition] has exceeded the max bytes”. Below, I wanted to provide some additional info on this particular issue and help identify some options if this occurs. As an aside, this post only applies if you are missing portions of Usage data - think blind spots on intermittent days or user activity regularly sparse for the afternoon/evening. If this fits your scenario - read on. But if Usage logs are outright missing, go check out my Troubleshooting post first.  Background on the problem:The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed sized logical partitions – and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDBwould be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days].Going back to the default scenario, the “max size” for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away because SharePoint won’t allow any more to be written to that day’s partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following that gets reported in the ULS:04/08/2012 09:30:04.78    OWSTIMER.EXE (0x1E24)    0x2C98    SharePoint Foundation    Health    i0m6     High    Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368It’s also worth noting that the exact bytes reported (e.g. ‘444858368’ above) may slightly vary among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter, but this error message intends to indicates that the reporting usage has exceeded the partition size for the given day.What it means:The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise to periodically check the ULS logs for this message. Down the road, I plan to explore if [Developing a Custom Health Rule] could be leveraged to identify the issue (If you've ever built Custom Health Rules, I'd be interested to hear about your experiences). 
Overcoming this issue also poses a challenge, with workaround options including:

Lower the retention. Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data – the lower the retention, the lower the divisor and thus a bigger partition. For example, halving the retention from 14 to 7 days would halve the number of partitions, but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions… and so on.

Recreate the LoggingDB with an increased size. The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data because the Usage Service Application can only have a single LoggingDB.

Here is an example snippet to update the "Page Requests" Usage Definition:

$def = Get-SPUsageDefinition -Identity "page requests"
$def.MaxTotalSizeInBytes = 12400000000
$def.Update()

Create a new Logging database and attach it to the Usage Service Application using the following command:

Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the query with the name provided in the PowerShell above):

SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (nolock)
WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'
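For the first workaround (lowering retention), a one-line sketch using the standard SharePoint 2010 cmdlets - the usage definition name follows the example above, so confirm it in your farm before running it:

      # Halve the retention so each daily partition of the LoggingDB doubles in size
      Set-SPUsageDefinition -Identity "page requests" -DaysRetained 7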

    Read the article

  • Opening a new Windows from ASP.NET code behind

    - by TATWORTH
    At http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx there is an excellent post on how to open a new windows from code behind. The purists may not like it but it helped solve a problem for a client's client. Here is an update for VS2010 users: using System; using System.Web; using System.Web.UI; /// <summary> /// Response Helper for opening popup windo from code behind. /// </summary> public static class ResponseHelper {   /// <summary>   /// Redirect to popup window   /// </summary>   /// <param name="response">The response.</param>   /// <param name="url">URL to open to</param>   /// <param name="target">Target of window _self or _blank</param>   /// <param name="windowFeatures">Features such as window bar</param>   /// <remarks>   ///     <list type="bullet">   ///         <item>   /// From http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx   /// </item>   /// <item>   /// Note: If you use it outside the context of a Page request, you can't redirect to a new window. The reason is the need to call the ResolveClientUrl method on Page, which I can't do if there is no Page. I could have just built my own version of that method, but it's more involved than you might think to do it right. So if you need to use this from an HttpHandler other than a Page, you are on your own.   /// </item>   ///         <item>   /// Beware of popup blockers.   /// </item>   /// <item>   /// Note: Obviously when you are redirecting to a new window, the current window will still be hanging around. Normally redirects abort the current request -- no further processing occurs. But for these redirects, processing continues, since we still have to serve the response for the current window (which also happens to contain the script to open the new window, so it is important that it completes).   /// </item>   /// <item>   /// Sample call Response.Redirect("popup.aspx", "_blank", "menubar=0,width=100,height=100");   /// </item>   ///     </list>   /// </remarks>   public static void Redirect(this HttpResponse response, string url, string target, string windowFeatures)   {     if ((String.IsNullOrEmpty(target) || target.Equals("_self", StringComparison.OrdinalIgnoreCase)) && String.IsNullOrEmpty(windowFeatures))     {       response.Redirect(url);     }     else     {       Page page = (Page)HttpContext.Current.Handler;       if (page == null)       {         throw new InvalidOperationException("Cannot redirect to new window outside Page context.");       }       url = page.ResolveClientUrl(url);       string script;       if (!String.IsNullOrEmpty(windowFeatures))       {         script = @"window.open(""{0}"", ""{1}"", ""{2}"");";       }       else       {         script = @"window.open(""{0}"", ""{1}"");";       }       script = String.Format(script, url, target, windowFeatures);       ScriptManager.RegisterStartupScript(page, typeof(Page), "Redirect", script, true);     }   } }
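For completeness, a hypothetical code-behind call site (the button name and target page below are placeholders, not from the original post):

      // Opens popup.aspx in a new window; the current response still completes,
      // because the redirect is emitted as a window.open startup script.
      protected void btnOpenPopup_Click(object sender, EventArgs e)
      {
          Response.Redirect("popup.aspx", "_blank", "menubar=0,width=100,height=100");
      }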

    Read the article

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive?  Me too and Windows Explorer hates it.  Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer and my answer used to be to write a program in something like C# to do the job.  These programs will typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp.  The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration because it appears the standard .Net calls make use of the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code. Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died.  So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement to make me do it.  The resulting script was amazingly succinct and even building the flexibility for parameterization and adding line breaks for readability was only about 25 lines long.  Here's the code with discussion following: param(     [Parameter(         Mandatory = $false,         Position = 0,         HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"     )]     [String] $folderRoot="\\fileServer\pathToFolderWithLotsOfFiles\",       [Parameter(         Mandatory = $false,         Position = 1     )]     [int] $days = 1 ) dir $folderRoot|?{(!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days }|%{     [string]$year=$([string]$_.lastwritetime.year)     [string]$month=$_.lastwritetime.month     [string]$day=$_.lastwritetime.day     $dir=$folderRoot+$year+"\"+$month+"\"+$day     if(!(test-path $dir)){         new-item -type container $dir     }     Write-output $_     move-item $_.fullname $dir } The script starts by declaring two parameters.  The first parameter holds the path to the folder that I am going to be sorting into subdirectories.  The path separator is intended to be included in this argument because I didn't want to mess with determining whether this was local or UNC and picking the right separator in code, but this could be easily improved upon using Path.Combine since PowerShell has access to the full framework libraries.  The second parameter holds a minimum age in days for files to be removed from the root folder.  The script then pipes the dir command through a query to include only files (by excluding containers) and of those, only entries that meet the age requirement based on the last modified datestamp.  For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files need further divided, you could make this more granular) and the folder is created if it does not yet exist.  Finally, the file is moved into the directory. One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call.  
In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I want, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done making PowerShell powerful and useful.
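Assuming the script above is saved to a file (the name below is invented), a typical invocation against a share where only the last week of files should stay loose in the root would be:

      # Move everything older than 7 days into YYYY\MM\DD subfolders
      .\Move-OldFilesIntoDateFolders.ps1 -folderRoot "\\fileServer\pathToFolderWithLotsOfFiles\" -days 7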

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line of business applications to external partners and SAAS applications and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications. When we were looking at the operational monitoring side of the solution it was obvious that although most of the normal server monitoring capabilities would be required for the on premise components we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected. A number of months ago one of my colleagues Elton Stoneman wrote about an approach I have introduced with a number of clients in the past where we implement a diagnostics service in each service component we build. This service would allow us to make a call which would flex some of the working parts of the system to prove it was working within any SLA. This approach is discussed on the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx In our solution we wanted to take the same approach but we had to consider that the service clients were external to the service. We also had to consider that by going through Windows Azure Service Bus it's not that easy to make most of your standard monitoring solutions just give you an easy way to do this. In a previous article I have described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager and I felt that we could use this approach to provide an excellent way to monitor our service bus endpoint. The previous article is available on the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx   The Monitoring Solution BizTalk 360 currently has an easy way to hook up the endpoint manager to a url which it will then call and if a successful response is returned it then considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.net web page which would be called by BizTalk 360 and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.net page could include logic to work out how to handle the response from the diagnostics service. For example if the overall result of the diagnostics service was successful but the call to the diagnostics service was longer than a certain amount of time then we could return an error and indicate the service is taking too long. The following diagram illustrates the monitoring pattern.   The diagnostics service which is hosted in the line of business application allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application and we they get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then it would return an HTTP response code which would depend on the error condition returned or a 200 if it was successful. 
One of the limitations of this approach is that the competing consumer pattern for listening to messages from service bus means that you cannot guarantee which server would process your diagnostics check message but with BizTalk 360 you could simply add multiple endpoint checks so that it could access the individual on-premise web servers directly to ensure that each server is working fine and then check that messages can also be processed through the cloud. Conclusion It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services which had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise class monitoring solution for our cloud enabled API.
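To make the pattern concrete, here is a minimal sketch of the ASP.NET page that BizTalk 360 would poll. The DiagnosticsServiceClient proxy, its Ping() operation and the IsHealthy flag are assumptions for illustration - substitute whatever contract the diagnostics service actually exposes:

      using System;
      using System.Diagnostics;
      using System.Web.UI;

      public partial class ServiceBusHealthCheck : Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              var stopwatch = Stopwatch.StartNew();
              try
              {
                  // WCF client generated against the diagnostics contract, relayed via the Service Bus
                  using (var client = new DiagnosticsServiceClient())
                  {
                      var result = client.Ping();
                      stopwatch.Stop();

                      // Healthy and inside a 5 second SLA -> 200, anything else -> 500
                      Response.StatusCode = (result.IsHealthy && stopwatch.ElapsedMilliseconds <= 5000) ? 200 : 500;
                  }
              }
              catch (Exception)
              {
                  // Relay endpoint not registered, service down, credentials wrong, etc.
                  Response.StatusCode = 500;
              }
          }
      }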

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access.Topics include: Domain object caching Service response caching Session state caching JSR-107 HotCache and more! Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here for you - just in case you had the same queries yourself. Enjoy! What is the largest Coherence deployment out there? We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed. How is Coherence different from a NoSQL Database like MongoDB? Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001 whereas the term "NoSQL" was coined in 2009. Coherence has a key-value data model primarily but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is faster with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL database lack, such as grid computing, eventing, and data source integration. Finally Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services. Can I use Coherence for local caching? Yes, you get additional features beyond just a java.util.Map: you get expiration capabilities, size-limitation capabilities, eventing capabilites, etc. Are there APIs available for GoldenGate HotCache? It's mostly a black box. You configure it, and it just puts objects into your caches. However you can treat it as a glass box, and use Coherence event interceptors to enhance its behavior - and there are use cases for that. Are Coherence caches updated transactionally? Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources. But nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.

    Read the article

  • TFS 2012 API Create Alert Subscriptions

    - by Bob Hardister
    Originally posted on: http://geekswithblogs.net/BobHardister/archive/2013/07/24/tfs-2012-api-create-alert-subscriptions.aspxThere were only a few post on this and I felt like really important information was left out: What the defaults are How to create the filter string Here’s the code to create the subscription. Get the Collection public TfsTeamProjectCollection GetCollection(string collectionUrl) { try { //connect to the TFS collection using the active user TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri(collectionUrl)); tpc.EnsureAuthenticated(); return tpc; } catch (Exception) { return null; } } Use Impersonation Because my app is used to create “support tickets” as stories in TFS, I use impersonation so the subscription is setup for the “requester.”  That way I can take all the defaults for the subscription delivery preferences. public TfsTeamProjectCollection GetCollectionImpersonation(string collectionUrl, string impersonatingUserAccount) { // see: http://blogs.msdn.com/b/taylaf/archive/2009/12/04/introducing-tfs-impersonation.aspx try { TfsTeamProjectCollection tpc = GetCollection(collectionUrl); if (!(tpc == null)) { //get the TFS identity management service (v2 is 2012 only) IIdentityManagementService2 ims = tpc.GetService<IIdentityManagementService2>(); //look up the user we want to impersonate TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName, impersonatingUserAccount, MembershipQuery.None, ReadIdentityOptions.None); //create a new connection using the impersonated user account //note: do not ensure authentication because the impersonated user may not have //windows authentication at execution if (!(identity == null)) { TfsTeamProjectCollection itpc = new TfsTeamProjectCollection(tpc.Uri, identity.Descriptor); return itpc; } else { //the user account is not found return null; } } else { return null; } } catch (Exception) { return null; } } Create the Alert Subscription public bool SetWiAlert(string collectionUrl, string projectName, int wiId, string emailAddress, string userAccount) { bool setSuccessful = false; try { //use impersonation so the event service creating the subscription will default to //the correct account: otherwise domain ambiguity could be a problem TfsTeamProjectCollection itpc = GetCollectionImpersonation(collectionUrl, userAccount); if (!(itpc == null)) { IEventService es = itpc.GetService(typeof(IEventService)) as IEventService; DeliveryPreference deliveryPreference = new DeliveryPreference(); //deliveryPreference.Address = emailAddress; deliveryPreference.Schedule = DeliverySchedule.Immediate; deliveryPreference.Type = DeliveryType.EmailHtml; //the following line does not work for two reasons: //string filter = string.Format("\"ID\" = '{0}' AND \"Authorized As\" <> '[Me]'", wiId); //1. the create fails because there is a space between Authorized As //2. 
the explicit query criteria are all incorrect anyway // see uncommented line for what does work: you have to create the subscription mannually // and then get it to view what the filter string needs to be (see following commented code) //this works string filter = string.Format("\"CoreFields/IntegerFields/Field[Name='ID']/NewValue\" = '12175'" + " AND \"CoreFields/StringFields/Field[Name='Authorized As']/NewValue\"" + " <> '@@MyDisplayName@@'", projectName, wiId); string eventName = string.Format("<PT N=\"ALM Ticket for Work Item {0}\"/>", wiId); es.SubscribeEvent("WorkItemChangedEvent", filter, deliveryPreference, eventName); ////use this code to get existing subscriptions: you can look at manually created ////subscriptions to see what the filter string needs to be //IIdentityManagementService2 ims = itpc.GetService<IIdentityManagementService2>(); //TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName, // userAccount, // MembershipQuery.None, // ReadIdentityOptions.None); //var existingsubscriptions = es.GetEventSubscriptions(identity.Descriptor); setSuccessful = true; return setSuccessful; } else { return setSuccessful; } } catch (Exception) { return setSuccessful; } }
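A hypothetical call site for the helper above, assuming the three methods live in an instance class (called TfsAlertHelper here); every argument is a placeholder:

      // Subscribe the requester to change alerts for work item 12175
      var helper = new TfsAlertHelper();
      bool subscribed = helper.SetWiAlert(
          "https://tfs.example.com/tfs/DefaultCollection",   // collectionUrl
          "SupportTickets",                                  // projectName
          12175,                                             // work item id
          "requester@example.com",                           // emailAddress (unused by the default delivery preferences)
          "DOMAIN\\requester");                              // userAccount to impersonate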

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >