Search Results

Search found 10028 results on 402 pages for 'berkeley db'.

  • Ajax post not posting email address?

    - by jeitjet
    UPDATE: It will not work in Firefox, but will work on any other browser. I even tried loading Firefox in safe mode (disabling all plugins, etc.) and still no worky. :( I'm trying to do an AJAX post (on form submission) to a separate PHP file, which works fine without trying to send an email address through the post. I'm fairly new to AJAX and pretty familiar with PHP. Here's my form and ajax call <form class="form" method="POST" name="settingsNotificationsForm"> <div class="clearfix"> <label>Email <em>*</em><small>A valid email address</small></label><input type="email" required="required" name="email" id="email" /> </div> <div class="clearfix"> <label>Email Notification<small>...when a new subscriber joins</small></label><input type="checkbox" name="subscribe_notifications" id="subscribe_notifications"> Receive an email notification with phone number when someone new subscribes to 'BIZDEMO' </div> <div class="clearfix"> <label>Email Notification<small>...when a subscriber cancels</small></label><input type="checkbox" name="unsubscribe_notifications" id="unsubscribe_notifications"> Receive an email notification with phone number when someone new unsubscribes to 'BIZDEMO' </div> <div class="action clearfix top-margin"> <button class="button button-gray" type="submit" id="notifications_submit"><span class="accept"></span>Save</button> </div> </form> and AJAX call: <script type="text/javascript"> jQuery(document).ready(function () { $("#notifications_submit").click(function() { var keyword_value = '<?php echo $keyword; ?>'; var email_address = $("input#email").val(); var subscribe_notifications_value = $("input#subscribe_notifications").attr('checked'); var unsubscribe_notifications_value = $("input#unsubscribe_notifications").attr('checked'); var data_values = { keyword : keyword_value, email : email_address, subscribe_notifications : subscribe_notifications_value, unsubscribe_notifications : unsubscribe_notifications_value }; $.ajax({ type: "POST", url: "../includes/ajax/update_settings.php", data: data_values, success: alert('Settings updated successfully!'), }); }); }); and receiving page: <?php include_once ("../db/db_connect.php"); $keyword = FILTER_INPUT(INPUT_POST, 'keyword' ,FILTER_SANITIZE_STRING); $email = FILTER_INPUT(INPUT_POST, 'email' ,FILTER_SANITIZE_EMAIL); $subscribe_notifications = FILTER_INPUT(INPUT_POST, 'subscribe_notifications' ,FILTER_SANITIZE_STRING); $unsubscribe_notifications = FILTER_INPUT(INPUT_POST, 'unsubscribe_notifications' ,FILTER_SANITIZE_STRING); $table = 'keyword_options'; $data_values = array('email' => $email, 'sub_notify' => $subscribe_notifications, 'unsub_notify' => $unsubscribe_notifications); foreach ($data_values as $name=>$value) { // See if keyword is already in database table $filter = array('keyword' => $keyword); $result = $db->find($table, $filter); if (count($result) > 0 && $new != true) { $where = array('keyword' => $keyword, 'keyword_meta' => $name); $data = array('keyword_value' => $value); $db->update($table, $where, $data); } else { $data = array('keyword' => $keyword, 'keyword_meta' => $name, 'keyword_value' => $value); $db->create($table, $data); $new = true; // If this is a new record, always go to else statement } } unset($value); Here are some weird things that happen: When I only enter text into the email field, (i.e. - abc), it works fine, posts correctly, etc. When I enter a bogus email address with the "." before the "@", it works fine When I enter a validated email address (with the "." after the "@"), the post fails. 
Ideas?

  • Java saying XML Document Not Well Formed

    - by Pyroclastic
    Hey all. Java's XML parser seems to be thinking that my XML document is not well formed following the root element, but I've validated it with several tools and they all disagree. It's probably an error in my code rather than in the document itself, I'd really appreciate any help you all could offer me. Here is my Java method: private void loadFromXMLFile(File f) throws ParserConfigurationException, IOException, SAXException { File file = f; DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db; Document doc = null; db = dbf.newDocumentBuilder(); doc = db.parse(file); doc.getDocumentElement().normalize(); String desc = ""; String due = ""; String comment = ""; NodeList tasksList = doc.getElementsByTagName("task"); for (int i = 0; i < tasksList.getLength(); i++) { NodeList attributes = tasksList.item(i).getChildNodes(); for (int j = 0; i < attributes.getLength(); j++) { Node attribute = attributes.item(i); if (attribute.getNodeName() == "description") { desc = attribute.getTextContent(); } if (attribute.getNodeName() == "due") { due = attribute.getTextContent(); } if (attribute.getNodeName() == "comment") { comment = attribute.getTextContent(); } tasks.add(new Task(desc, due, comment)); } desc = ""; due = ""; comment = ""; } } And here is the XML file I'm trying to load: <?xml version="1.0"?> <tasklist> <task> <description>Task 1</description> <due>Due date 1</due> <comment>Comment 1</comment> <completed>false</completed> </task> <task> <description>Task 2</description> <due>Due date 2</due> <comment>Comment 2</comment> <completed>false</completed> </task> <task> <description>Task 3</description> <due>Due date 3</due> <comment>Comment 3</comment> <completed>true</completed> </task> </tasklist> And here is the error message java is throwing for me: run: [Fatal Error] tasks.xml:28:3: The markup in the document following the root element must be well-formed. May 17, 2010 6:07:02 PM todolist.TodoListGUI SEVERE: null org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:239) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:283) at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:208) at todolist.TodoListGUI.loadFromXMLFile(TodoListGUI.java:199) at todolist.TodoListGUI.(TodoListGUI.java:42) at todolist.Main.main(Main.java:25) BUILD SUCCESSFUL (total time: 19 seconds) For reference TodoListGUI.java:199 is doc = db.parse(file); If context is helpful to anyone here, I'm trying to write a simple GUI application to manage a todo list that can read and write to and from XML files defining the tasks. Any advice is appreciated!
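
    The parser's complaint about line 28 suggests stray characters after the closing </tasklist> tag, which the excerpt can't show, but the traversal loop also has visible bugs: the inner loop's condition and index use i where j is being incremented, node names are compared with == instead of equals(), and a Task is added for every child node rather than once per <task>. A hedged sketch of just that method, corrected, is below; Task and its (description, due, comment) constructor are assumed from the question.

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;

        // Hedged sketch of the corrected traversal only; error handling and the
        // surrounding class are omitted, and Task is assumed from the question.
        public class TaskLoader {
            public static List<Task> loadTasks(File f) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(f);
                doc.getDocumentElement().normalize();
                List<Task> tasks = new ArrayList<Task>();
                NodeList taskNodes = doc.getElementsByTagName("task");
                for (int i = 0; i < taskNodes.getLength(); i++) {
                    NodeList children = taskNodes.item(i).getChildNodes();
                    String desc = "", due = "", comment = "";
                    for (int j = 0; j < children.getLength(); j++) {  // iterate on j, not i
                        Node child = children.item(j);                // index with j
                        String name = child.getNodeName();
                        if ("description".equals(name)) desc = child.getTextContent();
                        else if ("due".equals(name)) due = child.getTextContent();
                        else if ("comment".equals(name)) comment = child.getTextContent();
                    }
                    tasks.add(new Task(desc, due, comment));          // one Task per <task>
                }
                return tasks;
            }
        }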

  • replaceAll() method using parameters from a text file

    - by Herman Plani Ginting
    i have a collection of raw text in a table in database, i need to replace some words in this collection using a set of words. i put all the term to be replace and its substitutes in a text file as below min=admin lelet=lambat lemot=lambat nii=nih ntu=itu and so on. i have successfully initiate a variabel of File and Scanner to read the collection of the term and its substitutes. i loop all the dataset and save the raw text in a string in the same loop i loop all the term collection and save its row to a string name 'pattern', and split the pattern into two string named 'term' and 'replacer' in this loop i initiate a new string which its value is the string from the dataset modified by replaceAll(term,replacer) end loop for term collection then i insert the new string to another table in database end loop for dataset i do it manualy as below replaceAll("min","admin") and its works but its really something to code it manually for almost 2000 terms to be replace it. anyone ever face this kind of really something.. i really need a help now desperate :( package sentimenrepo; import javax.swing.*; import java.sql.*; import java.io.*; //import java.util.HashMap; import java.util.Scanner; //import java.util.Map; /** * * @author herman */ public class synonimReplaceV2 extends SwingWorker { protected Object doInBackground() throws Exception { new skripsisentimen.sentimenttwitter().setVisible(true); Integer row = 0; File synonimV2 = new File("synV2/catatan_kata_sinonim.txt"); String newTweet = ""; DB db = new DB(); Connection conn = db.dbConnect("jdbc:mysql://localhost:3306/tweet", "root", ""); try{ Statement select = conn.createStatement(); select.executeQuery("select * from synonimtweet"); ResultSet RS = select.getResultSet(); Scanner scSynV2 = new Scanner(synonimV2); while(RS.next()){ row++; String no = RS.getString("no"); String tweet = " "+ RS.getString("tweet"); String published = RS.getString("published"); String label = RS.getString("label"); clean2 cleanv2 = new clean2(); newTweet = cleanv2.cleanTweet(tweet); try{ Statement insert = conn.createStatement(); insert.executeUpdate("INSERT INTO synonimtweet_v2(no,tweet,published,label) values('" +no+"','"+newTweet+"','"+published+"','"+label+"')"); String current = skripsisentimen.sentimenttwitter.txtAreaResult.getText(); skripsisentimen.sentimenttwitter.txtAreaResult.setText(current+"\n"+row+"original : "+tweet+"\n"+newTweet+"\n______________________\n"); skripsisentimen.sentimenttwitter.lblStat.setText(row+" tweet read"); skripsisentimen.sentimenttwitter.txtAreaResult.setCaretPosition(skripsisentimen.sentimenttwitter.txtAreaResult.getText().length() - 1); }catch(Exception e){ skripsisentimen.sentimenttwitter.lblStat.setText(e.getMessage()); } skripsisentimen.sentimenttwitter.lblStat.setText(e.getMessage()); } }catch(Exception e){ skripsisentimen.sentimenttwitter.lblStat.setText(e.getMessage()); } return row; } class clean2{ public clean2(){} public String cleanTweet(String tweet){ File synonimV2 = new File("synV2/catatan_kata_sinonim.txt"); String pattern = ""; String term = ""; String replacer = ""; String newTweet=""; try{ Scanner scSynV2 = new Scanner(synonimV2); while(scSynV2.hasNext()){ pattern = scSynV2.next(); term = pattern.split("=")[0]; replacer = pattern.split("=")[1]; newTweet = tweet.replace(term, replacer); } }catch(Exception e){ e.printStackTrace(); } System.out.println(newTweet+"\n"+tweet); return newTweet; } } }
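
    In the inner loop of cleanTweet, every pass assigns tweet.replace(term, replacer) to newTweet, so each rule is applied to the original string and only the last line of the file survives; the dictionary file is also re-opened and re-scanned for every row. A hedged sketch of the lookup-driven version, keeping the term=replacement file format from the question (class and method names here are invented):

        import java.io.File;
        import java.util.LinkedHashMap;
        import java.util.Map;
        import java.util.Scanner;

        // Hedged sketch: load the "term=replacement" pairs once, then apply each
        // rule to the accumulating result instead of to the original string.
        public class SynonymReplacer {
            private final Map<String, String> rules = new LinkedHashMap<String, String>();

            public SynonymReplacer(File dictionary) throws Exception {
                Scanner sc = new Scanner(dictionary);
                while (sc.hasNext()) {
                    String[] parts = sc.next().split("=", 2);  // e.g. "lelet=lambat"
                    if (parts.length == 2) {
                        rules.put(parts[0], parts[1]);
                    }
                }
                sc.close();
            }

            public String clean(String text) {
                String result = text;
                for (Map.Entry<String, String> rule : rules.entrySet()) {
                    // each replacement builds on the previous one
                    result = result.replace(rule.getKey(), rule.getValue());
                }
                return result;
            }
        }

    Building the replacer once, outside the result-set loop, also means the roughly 2000 rules are read from disk a single time instead of once per tweet.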

  • mine phrases (up to 3 words) from a given text

    - by DS_web_developer
    I asked before for a simple solution to my problem (using sphinx search service) but I got nowhere... someone has kindly provided me with this code <?php /** * $Project: GeoGraph $ * $Id$ * * GeoGraph geographic photo archive project * This file copyright (C) 2005 Barry Hunter ([email protected]) * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ /** * Provides the methods for updating the worknet tables * * @package Geograph * @author Barry Hunter <[email protected]> * @version $Revision$ */ function addTwoLetterPhrase($phrase) { global $w2; $w2[$phrase] = (isset($w2[$phrase]))?($w2[$phrase]+1):1; } function addThreeLetterPhrase($phrase) { global $w3; $w3[$phrase] = (isset($w3[$phrase]))?($w3[$phrase]+1):1; } function updateWordnet(&$db,$text,$field,$id) { global $w1,$w2,$w3; $alltext = strtolower(preg_replace('/\W+/',' ',str_replace("'",'',$text))); if (strlen($text)< 1) return; $words = preg_split('/ /',$alltext); $w1 = array(); $w2 = array(); $w3 = array(); //build a list of one word phrases foreach ($words as $word) { $w1[$word] = (isset($w1[$word]))?($w1[$word]+1):1; } //build a list of two word phrases $text = $alltext; $text = preg_replace('/(\w+) (\w+)/e','addTwoLetterPhrase("$1 $2")',$text); $text = $alltext; $text = preg_replace('/(\w+)/','',$text,1); $text = preg_replace('/(\w+) (\w+)/e','addTwoLetterPhrase("$1 $2")',$text); //build a list of three word phrases $text = $alltext; $text = preg_replace('/(\w+) (\w+) (\w+)/e','addThreeLetterPhrase("$1 $2 $3")',$text); $text = $alltext; $text = preg_replace('/(\w+)/','',$text,1); $text = preg_replace('/(\w+) (\w+) (\w+)/e','addThreeLetterPhrase("$1 $2 $3")',$text); $text = $alltext; $text = preg_replace('/(\w+) (\w+)/','',$text,1); $text = preg_replace('/(\w+) (\w+) (\w+)/e','addThreeLetterPhrase("$1 $2 $3")',$text); foreach ($w1 as $word=>$count) { $db->Execute("insert into wordnet1 set gid = $id,words = '$word',$field = $count");// ON DUPLICATE KEY UPDATE $field=$field+$count"); } foreach ($w2 as $word=>$count) { $db->Execute("insert into wordnet2 set gid = $id,words = '$word',$field = $count"); } foreach ($w3 as $word=>$count) { $db->Execute("insert into wordnet3 set gid = $id,words = '$word',$field = $count"); } } ?> It works fine and does almost exactly what I need....... except.... it is not utf8 friendly... I mean... it splits whole words into parts (on special chars) where it shouldn't! so my guess is I should use multibyte functions instead of regular preg_replace... I tried to replace preg_replace with mb_ereg_replace but it is not working as it should... at least not for 2 and 3 words phrases any ideas?
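
    The splitting problem comes from the byte-oriented \w / \W classes in the regexes, which treat accented and other multibyte characters as word boundaries. The counting itself is plain 1-, 2- and 3-word n-gram extraction, which a Unicode-aware tokenizer handles cleanly; a hedged sketch of that step, shown in Java rather than PHP and with the wordnet table writes left out:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Hedged sketch: count every 1- to 3-word phrase in a text, splitting on
        // anything that is not a Unicode letter or digit so multibyte words stay whole.
        public class PhraseMiner {
            public static Map<String, Integer> phrases(String text, int maxWords) {
                List<String> tokens = new ArrayList<>();
                for (String w : text.toLowerCase().split("[^\\p{L}\\p{N}]+")) {
                    if (!w.isEmpty()) tokens.add(w);
                }
                Map<String, Integer> counts = new HashMap<>();
                for (int n = 1; n <= maxWords; n++) {             // phrase length 1..maxWords
                    for (int i = 0; i + n <= tokens.size(); i++) {
                        String phrase = String.join(" ", tokens.subList(i, i + n));
                        counts.merge(phrase, 1, Integer::sum);    // increment the n-gram count
                    }
                }
                return counts;
            }
        }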

  • C - Error with read() of a file, storage in an array, and printing output properly

    - by ns1
    I am new to C, so I am not exactly sure where my error is. However, I do know that the great portion of the issue lies either in how I am storing the doubles in the d_buffer (double) array or the way I am printing it. Specifically, my output keeps printing extremely large numbers (with around 10-12 digits before the decimal point and a trail of zeros after it. Additionally, this is an adaptation of an older program to allow for double inputs, so I only really added the two if statements (in the "read" for loop and the "printf" for loop) and the d_buffer declaration. I would appreciate any input whatsoever as I have spent several hours on this error. #include <stdio.h> #include <fcntl.h> #include <sys/types.h> #include <unistd.h> #include <string.h> struct DataDescription { char fieldname[30]; char fieldtype; int fieldsize; }; /* ----------------------------------------------- eof(fd): returns 1 if file `fd' is out of data ----------------------------------------------- */ int eof(int fd) { char c; if ( read(fd, &c, 1) != 1 ) return(1); else { lseek(fd, -1, SEEK_CUR); return(0); } } void main() { FILE *fp; /* Used to access meta data */ int fd; /* Used to access user data */ /* ---------------------------------------------------------------- Variables to hold the description of the data - max 10 fields ---------------------------------------------------------------- */ struct DataDescription DataDes[10]; /* Holds data descriptions for upto 10 fields */ int n_fields; /* Actual # fields */ /* ------------------------------------------------------ Variables to hold the data - max 10 fields.... ------------------------------------------------------ */ char c_buffer[10][100]; /* For character data */ int i_buffer[10]; /* For integer data */ double d_buffer[10]; int i, j; int found; printf("Program for searching a mini database:\n"); /* ============================= Read in meta information ============================= */ fp = fopen("db-description", "r"); n_fields = 0; while ( fscanf(fp, "%s %c %d", DataDes[n_fields].fieldname, &DataDes[n_fields].fieldtype, &DataDes[n_fields].fieldsize) > 0 ) n_fields++; /* --- Prints meta information --- */ printf("\nThe database consists of these fields:\n"); for (i = 0; i < n_fields; i++) printf("Index %d: Fieldname `%s',\ttype = %c,\tsize = %d\n", i, DataDes[i].fieldname, DataDes[i].fieldtype, DataDes[i].fieldsize); printf("\n\n"); /* --- Open database file --- */ fd = open("db-data", O_RDONLY); /* --- Print content of the database file --- */ printf("\nThe database content is:\n"); while ( ! eof(fd) ) { /* ------------------ Read next record ------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) read(fd, &i_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'F' ) read(fd, &d_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'C' ) read(fd, &c_buffer[j], DataDes[j].fieldsize); } double d; /* ------------------ Print it... 
------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) printf("%d ", i_buffer[j]); if ( DataDes[j].fieldtype == 'F' ) d = d_buffer[j]; printf("%lf ", d); if ( DataDes[j].fieldtype == 'C' ) printf("%s ", c_buffer[j]); } printf("\n"); } printf("\n"); printf("\n"); } Post edits output: 16777216 0.000000 107245694331284094976.000000 107245694331284094976.000000 Pi 33554432 107245694331284094976.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 Secret Key 50331648 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 -0.000000 -0.000000 The number E Expected Output: 3 rows of data ending with the number "e = 2.18281828" To reproduce the problem, the following two files need to be in the same directory as the lookup-data.c file: - db-data - db-description
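
    Two things stand out in the excerpt: the 'F' branch in the printing loop has no braces, so its printf runs for every field regardless of type, and the enormous values suggest the bytes being read do not line up with an 8-byte double (the db-description file is not shown, so that part is only a guess). For reference, the same technique of reading typed fields driven by a metadata description looks like this in Java, as a hedged sketch with assumed field sizes and byte order:

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        // Hedged sketch: read one field of `size` bytes and interpret it by type,
        // mirroring the I/F/C field types above. Sizes (4 for int, 8 for double)
        // and little-endian order are assumptions, not taken from the question.
        public final class FieldReader {
            public static Object readField(DataInputStream in, char type, int size)
                    throws IOException {
                byte[] raw = new byte[size];
                in.readFully(raw);                                // read exactly `size` bytes
                ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
                switch (type) {
                    case 'I': return buf.getInt();                // integer field
                    case 'F': return buf.getDouble();             // floating-point field
                    default:  return new String(raw).trim();      // fixed-width text field
                }
            }
        }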

  • PNG image store in database and retrieve in Android 1.5

    - by hany
    hai, I am new to android. I have problem. This is my code but it will not work, the problem is in view binder. Please correct it. // this is my activity package com.android.Fruits2; import java.util.ArrayList; import java.util.HashMap; import android.app.ListActivity; import android.database.Cursor; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.os.Bundle; import android.widget.SimpleAdapter; import android.widget.SimpleCursorAdapter; import android.widget.SimpleAdapter.ViewBinder; public class Fruits2 extends ListActivity { private DBhelper mDB; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // setContentView(R.layout.main); mDB = new DBhelper(this); mDB.Reset(); Bitmap img = BitmapFactory.decodeResource(getResources(), R.drawable.icon); mDB.createPersonEntry(new PersonData(img, "Harsha", 24,"mca")); String[] columns = {mDB.KEY_ID, mDB.KEY_IMG, mDB.KEY_NAME, mDB.KEY_AGE, mDB.KEY_STUDY}; String table = mDB.PERSON_TABLE; Cursor c = mDB.getHandle().query(table, columns, null, null, null, null, null); startManagingCursor(c); SimpleCursorAdapter adapter = new SimpleCursorAdapter(this, R.layout.data, c, new String[] {mDB.KEY_IMG, mDB.KEY_NAME, mDB.KEY_AGE, mDB.KEY_STUDY}, new int[] {R.id.img, R.id.name, R.id.age,R.id.study}); adapter.setViewBinder( new MyViewBinder()); setListAdapter(adapter); } } //my viewbinder package com.android.Fruits2; import android.database.Cursor; import android.graphics.BitmapFactory; import android.view.View; import android.widget.ImageView; import android.widget.SimpleCursorAdapter; public class MyViewBinder implements SimpleCursorAdapter.ViewBinder { public boolean setViewValue(View view, Cursor cursor, int columnIndex) { if( (view instanceof ImageView) ) { ImageView iv = (ImageView) view; byte[] img = cursor.getBlob(columnIndex); iv.setImageBitmap(BitmapFactory.decodeByteArray(img, 0, img.length)); return true; } return false; } } // data package com.android.Fruits2; import android.graphics.Bitmap; public class PersonData { private Bitmap bmp; private String name; private int age; private String study; public PersonData(Bitmap b, String n, int k, String v) { bmp = b; name = n; age = k; study = v; } public Bitmap getBitmap() { return bmp; } public String getName() { return name; } public int getAge() { return age; } public String getStudy() { return study; } } //dbhelper package com.android.Fruits2; import java.io.ByteArrayOutputStream; import android.content.ContentValues; import android.content.Context; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteOpenHelper; import android.graphics.Bitmap; import android.provider.BaseColumns; public class DBhelper { public static final String KEY_ID = BaseColumns._ID; public static final String KEY_NAME = "name"; public static final String KEY_AGE = "age"; public static final String KEY_STUDY = "study"; public static final String KEY_IMG = "image"; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; private static final String DATABASE_NAME = "PersonalDB"; private static final int DATABASE_VERSION = 1; public static final String PERSON_TABLE = "Person"; private static final String CREATE_PERSON_TABLE = "create table "+PERSON_TABLE+" (" +KEY_ID+" integer primary key autoincrement, " +KEY_IMG+" blob not null, " +KEY_NAME+" text not null , " +KEY_AGE+" integer not null, " +KEY_STUDY+" text not null);"; private final Context mCtx; private 
boolean opened = false; private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } public void onCreate(SQLiteDatabase db) { db.execSQL(CREATE_PERSON_TABLE); } public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { db.execSQL("DROP TABLE IF EXISTS "+PERSON_TABLE); onCreate(db); } } public void Reset() { openDB(); mDbHelper.onUpgrade(this.mDb, 1, 1); closeDB(); } public DBhelper(Context ctx) { mCtx = ctx; mDbHelper = new DatabaseHelper(mCtx); } private SQLiteDatabase openDB() { if(!opened) mDb = mDbHelper.getWritableDatabase(); opened = true; return mDb; } public SQLiteDatabase getHandle() { return openDB(); } private void closeDB() { if(opened) mDbHelper.close(); opened = false; } public void createPersonEntry(PersonData about) { openDB(); ByteArrayOutputStream out = new ByteArrayOutputStream(); about.getBitmap().compress(Bitmap.CompressFormat.PNG, 100, out); ContentValues cv = new ContentValues(); cv.put(KEY_IMG, out.toByteArray()); cv.put(KEY_NAME, about.getName()); cv.put(KEY_AGE, about.getAge()); cv.put(KEY_STUDY, about.getStudy()); mDb.insert(PERSON_TABLE, null, cv); closeDB(); } } //data.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="wrap_content"> <ImageView android:id = "@+id/img" android:layout_width = "wrap_content" android:layout_height = "wrap_content" > </ImageView> <TextView android:id = "@+id/name" android:layout_width = "wrap_content" android:layout_height = "wrap_content" android:textSize="15dp" android:textColor="#ff0000" > </TextView> <TextView android:id = "@+id/age" android:layout_width = "wrap_content" android:layout_height = "wrap_content" android:textSize="15dp" android:textColor="#ff0000" /> <TextView android:id = "@+id/study" android:layout_width = "wrap_content" android:layout_height = "wrap_content" android:textSize="15dp" android:textColor="#ff0000" /> </LinearLayout> When I run this in android 1.6 and 2.1, it works. But when I run in android 1.5, not work. My application is android 1.5. Please correct and send code to me. Thank you.
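
    The blob round trip itself is small enough to isolate; a hedged sketch of just that part is below, with the table and image column names taken from the question (a real insert would also have to supply the other NOT NULL columns, and the adapter/ViewBinder wiring is left out):

        import java.io.ByteArrayOutputStream;
        import android.content.ContentValues;
        import android.database.Cursor;
        import android.database.sqlite.SQLiteDatabase;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        // Hedged sketch of storing a Bitmap as a PNG blob and decoding it back.
        public final class BitmapBlob {
            public static void insertImage(SQLiteDatabase db, Bitmap bmp) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                bmp.compress(Bitmap.CompressFormat.PNG, 100, out);  // quality is ignored for PNG
                ContentValues cv = new ContentValues();
                cv.put("image", out.toByteArray());                 // KEY_IMG column from the question
                // the remaining NOT NULL columns (name, age, study) would be put here
                db.insert("Person", null, cv);
            }

            public static Bitmap readImage(Cursor c, int imageColumn) {
                byte[] blob = c.getBlob(imageColumn);
                return blob == null ? null
                        : BitmapFactory.decodeByteArray(blob, 0, blob.length);
            }
        }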

  • I need some help with either my SQL or my PHP; I do not know which...

    - by sico87
    Hello I am creating a CMS and some of the functionality of it that the images that are within the content are managable. I currently trying to display a table that shows the the content title and then the associated images, ideally I would like a layout similar to this, Content Title Image 1 Image 2 Image 3 Content Title 2 Image 1 Image 2 Content Title 3 Image 1 The SQL the returns the data is actually formed using Codeigniters Active Record class, function getAllContentImages() { $this->db->select('*'); $this->db->from('contentImagesTable'); $this->db->join('contentTable', 'contentTable.contentId = contentImagesTable.contentId'); $this->db->join('categoryTable', 'categoryTable.categoryId = contentTable.categoryId'); $query = $this->db->get(); return $query->result_array(); } The array that is returned is looks like this, I have cut the size down for readability. Array ( [0] => Array ( [contentImageId] => 25 [contentImageName] => green.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/2/green.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265222654 [contentId] => 2 [dashboardUserId] => 0 [contentTitle] => sadsadsadassss [contentAbstract] => <p>Pllllleeeeeeeaaaaasssssseeeeee Work</p> [contentBody] => <p>Please work :-( please</p> [contentOnline] => 0 [contentAllowComments] => 0 [contentDateCreated] => 1265124038 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [1] => Array ( [contentImageId] => 28 [contentImageName] => yellow.png [contentImageType] => .png [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/7/yellow.png [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265388055 [contentId] => 7 [dashboardUserId] => 0 [contentTitle] => Another Blog [contentAbstract] => <p>This is another blog and it is shit becuase this does not work</p> [contentBody] => <p>ioasfihfududfhdufhuishdfiudshfiudhsfiuhdsiufhusdhfuids</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265388034 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [2] => Array ( [contentImageId] => 33 [contentImageName] => portaski.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/11/portaski.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265714175 [contentId] => 11 [dashboardUserId] => 0 [contentTitle] => Portaski - new product and brand launch by Bang [contentAbstract] => <p>Bang's experience in new product development has helped launch PortaSki &ndash; the pocket-sized device which is set to revolutionise skiing.</p> [contentBody] => <p>After developing Portaski's brand identity and positioning, Bang re-designed the product and its packaging ahead of launch in late 2008.</p> <p>A media and PR strategy was devised and implemented using Bang's close relationship with two of the UK's most influential organisations in the Advertising and Media Buying industries. 
On-line advertising was supported with editorial reviews in the UK's leading broadsheets and tabloids, which combined with pin-point HTML direct mail to drive consumers to the new e-commerce site.</p> <p>Impressive month-on-month growth has been achieved since launch, and the direct marketing activity resulted in an unprecedented 2.71% of targets going on-line to purchase a PortaSki.</p> <p>For further information visit <a href="http://www.portaski.com" target="_blank">www.portaski.com</a></p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265718184 [categoryId] => 1 [categoryTitle] => blogsss [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p> [categorySlug] => blog [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1266588327 ) [3] => Array ( [contentImageId] => 26 [contentImageName] => housingplus.jpg [contentImageType] => .jpg [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/5/housingplus.jpg [isHeadlineImage] => 1 [contentImageDateUploaded] => 1265284989 [contentId] => 5 [dashboardUserId] => 0 [contentTitle] => Bang launches Housing Plus [contentAbstract] => <p>Bang has launched Housing Plus, the new brand for the Central Borders Housing Group, along with new sub-brands Property Care and SSHA.</p> [contentBody] => <p>The Midlands based Group, with turnover in excess of &pound;21M, appointed Bang in 2008 following an open pitch of over 40 agencies. Bang's work began with an extensive marketing research strategy that challenged the Group's former positioning and brand structure.</p> <p>The research unveiled that the housing sector demanded a values-led Group. This led Bang to develop the brave &lsquo;Together for the Right Reasons' positioning for Housing Plus.</p> <p>Chris Garratt, Marketing Director at Bang explained "The housing sector has witnessed wholesale change in recent years. Much to tenant's dismay, many associations and Groups appear to be losing touch with their roots, we wanted to develop a Group for associations who place principles at the heart of their corporate strategy".</p> <p>The repositioned sub-brands also play an important role in the Group's revised brand by highlighting Housing Plus' willingness to embrace and nurture individual identities. Chris Garratt continued "By adopting a &lsquo;house of brands' hierarchy from the outset, Housing Plus has sent out a strong message to prospective strategic partners".</p> <p>Bang handled all aspects of work for the redevelopment of the three brands, including research, brand creation, naming, positioning, internal branding and communications, advertising, the brand launches, building the brands' on-line presence and the creation of a powerful brand film &ndash; which is already attracting significant interest from across the sector.</p> [contentOnline] => 1 [contentAllowComments] => 0 [contentDateCreated] => 1265285940 [categoryId] => 8 [categoryTitle] => News [categoryAbstract] => <p>The world at Bang Marketing moves fast, keep up to date w [categorySlug] => news [categoryIsSpecial] => 0 [categoryOnline] => 1 [categoryDateCreated] => 1265283717 ) I need a way that I can get all the content images associated with the same content title in one group and then display under the content title. Can anyone help?
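
    Since the join returns one flat row per image, the display problem reduces to regrouping those rows by their content title before rendering. A hedged sketch of that grouping step, shown in Java with Row as a made-up stand-in for one result row:

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        // Hedged sketch: collect the flat joined rows into one list per content title,
        // preserving the order in which titles first appear.
        public class ImageGrouper {
            public static class Row {                   // hypothetical shape of one result row
                public String contentTitle;
                public String contentImageName;
            }

            public static Map<String, List<Row>> byTitle(List<Row> rows) {
                Map<String, List<Row>> grouped = new LinkedHashMap<>();
                for (Row row : rows) {
                    grouped.computeIfAbsent(row.contentTitle, k -> new ArrayList<>())
                           .add(row);                   // every image ends up under its title
                }
                return grouped;
            }
        }

    The view then loops over the map, printing each title once followed by its images.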

  • PHP inserting Apostrophes where it shouldn't

    - by Jack W-H
    Hi folks Not too sure what's going on here as this doesn't seem like standard practise to me. But basically I have a basic database thingy going on that lets users submit code snippets. They can provide up to 5 tags for their submission. Now I'm still learning so please forgive me if this is obvious! Here's the PHP script that makes it all work (note there may be some CodeIgniter specific functions in there): function submitform() { $this->load->helper(array('form', 'url')); $this->load->library('form_validation'); $this->load->database(); $this->form_validation->set_error_delimiters('<p style="color:#FF0000;">', '</p>'); $this->form_validation->set_rules('title', 'Title', 'trim|required|min_length[5]|max_length[255]|xss_clean'); $this->form_validation->set_rules('summary', 'Summary', 'trim|required|min_length[5]|max_length[255]|xss_clean'); $this->form_validation->set_rules('bbcode', 'Code', 'required|min_length[5]'); // No XSS clean (or <script> tags etc. are gone) $this->form_validation->set_rules('tags', 'Tags', 'trim|xss_clean|required|max_length[254]'); if ($this->form_validation->run() == FALSE) { // Do some stuff if it fails } else { // User's input values $title = $this->db->escape(set_value('title')); $summary = $this->db->escape(set_value('summary')); $code = $this->db->escape(set_value('bbcode')); $tags = $this->db->escape(set_value('tags')); // Stop things like <script> tags working $codesanitised = htmlspecialchars($code); // Other values to be entered $author = $this->tank_auth->get_user_id(); $bi1 = ""; $bi2 = ""; // This long messy bit basically sees which browsers the code is compatible with. if (isset($_POST['IE6'])) {$bi1 .= "IE6, "; $bi2 .= "1, ";} else {$bi1 .= "IE6, "; $bi2 .= "NULL, ";} if (isset($_POST['IE7'])) {$bi1 .= "IE7, "; $bi2 .= "1, ";} else {$bi1 .= "IE7, "; $bi2 .= "NULL, ";} if (isset($_POST['IE8'])) {$bi1 .= "IE8, "; $bi2 .= "1, ";} else {$bi1 .= "IE8, "; $bi2 .= "NULL, ";} if (isset($_POST['FF2'])) {$bi1 .= "FF2, "; $bi2 .= "1, ";} else {$bi1 .= "FF2, "; $bi2 .= "NULL, ";} if (isset($_POST['FF3'])) {$bi1 .= "FF3, "; $bi2 .= "1, ";} else {$bi1 .= "FF3, "; $bi2 .= "NULL, ";} if (isset($_POST['SA3'])) {$bi1 .= "SA3, "; $bi2 .= "1, ";} else {$bi1 .= "SA3, "; $bi2 .= "NULL, ";} if (isset($_POST['SA4'])) {$bi1 .= "SA4, "; $bi2 .= "1, ";} else {$bi1 .= "SA4, "; $bi2 .= "NULL, ";} if (isset($_POST['CHR'])) {$bi1 .= "CHR, "; $bi2 .= "1, ";} else {$bi1 .= "CHR, "; $bi2 .= "NULL, ";} if (isset($_POST['OPE'])) {$bi1 .= "OPE, "; $bi2 .= "1, ";} else {$bi1 .= "OPE, "; $bi2 .= "NULL, ";} if (isset($_POST['OTH'])) {$bi1 .= "OTH, "; $bi2 .= "1, ";} else {$bi1 .= "OTH, "; $bi2 .= "NULL, ";} // $b1 is $bi1 without the last two characters (, ) which would cause a query error $b1 = substr($bi1, 0, -2); $b2 = substr($bi2, 0, -2); // :::::::::::THIS IS WHERE THE IMPORTANT STUFF IS, STACKOVERFLOW READERS:::::::::: // Split up all the words in $tags into individual variables - each tag is seperated with a space $pieces = explode(" ", $tags); // Usage: // echo $pieces[0]; // piece1 etc $ti1 = ""; $ti2 = ""; // Now we'll do similar to what we did with the compatible browsers to generate a bit of a query string if ($pieces[0]!=NULL) {$ti1 .= "tag1, "; $ti2 .= "$pieces[0], ";} else {$ti1 .= "tag1, "; $ti2 .= "NULL, ";} if ($pieces[1]!=NULL) {$ti1 .= "tag2, "; $ti2 .= "$pieces[1], ";} else {$ti1 .= "tag2, "; $ti2 .= "NULL, ";} if ($pieces[2]!=NULL) {$ti1 .= "tag3, "; $ti2 .= "$pieces[2], ";} else {$ti1 .= "tag3, "; $ti2 .= "NULL, ";} if ($pieces[3]!=NULL) {$ti1 .= "tag4, "; 
$ti2 .= "$pieces[3], ";} else {$ti1 .= "tag4, "; $ti2 .= "NULL, ";} if ($pieces[4]!=NULL) {$ti1 .= "tag5, "; $ti2 .= "$pieces[4], ";} else {$ti1 .= "tag5, "; $ti2 .= "NULL, ";} $t1 = substr($ti1, 0, -2); $t2 = substr($ti2, 0, -2); $sql = "INSERT INTO code (id, title, author, summary, code, date, $t1, $b1) VALUES ('', $title, $author, $summary, $codesanitised, NOW(), $t2, $b2)"; $this->db->query($sql); $this->load->view('subviews/template/headerview'); $this->load->view('subviews/template/menuview'); $this->load->view('subviews/template/sidebar'); $this->load->view('thanksforsubmission'); $this->load->view('subviews/template/footerview'); } } Sorry about that boring drivel of code there. I realise I probably have a few bad practises in there - please point them out if so. This is what the outputted query looks like (it results in an error and isn't queried at all): A Database Error Occurred Error Number: 1136 Column count doesn't match value count at row 1 INSERT INTO code (id, title, author, summary, code, date, tag1, tag2, tag3, tag4, tag5, IE6, IE7, IE8, FF2, FF3, SA3, SA4, CHR, OPE, OTH) VALUES ('', 'test2', 1, 'test2', 'test2 ', NOW(), 'test2, test2, test2, test2, test2', NULL, NULL, 1, 1, 1, 1, 1, 1, 1, NULL) You'll see at the bit after NOW(), 'test2, test2, test2, test2, test2' - I never asked it to put all that in apostrophes. Did I? What I could do is put each of those lines like this: if ($pieces[0]!=NULL) {$ti1 .= "tag1, "; $ti2 .= "'$pieces[0]', ";} else {$ti1 .= "tag1, "; $ti2 .= "NULL, ";} With single quotes around $pieces[0] etc. - but then my problem is that this kinda fails when the user only enters 4 tags, or 3, or whatever. Sorry if that's the worst phrased question in history, I tried, but my brain has turned to mush. Thanks for your help! Jack
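
    The stray apostrophes come from escape(), which wraps string values in quotes as well as escaping them, so the whole tag string is already quoted before it is split and re-joined. A hedged sketch of the placeholder-based alternative for the variable tag columns, written here as plain JDBC (only the tag columns are shown; the other columns from the question's INSERT would be bound the same way):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.Types;

        // Hedged sketch: bind up to five tags as parameters and pad the rest with
        // NULL, instead of hand-building the quoted column/value lists.
        public class TagInsert {
            public static void insertTags(Connection conn, String[] tags) throws Exception {
                String sql = "INSERT INTO code (tag1, tag2, tag3, tag4, tag5) "
                           + "VALUES (?, ?, ?, ?, ?)";
                PreparedStatement ps = conn.prepareStatement(sql);
                for (int i = 0; i < 5; i++) {
                    if (i < tags.length && !tags[i].isEmpty()) {
                        ps.setString(i + 1, tags[i]);     // bound value: no manual quoting needed
                    } else {
                        ps.setNull(i + 1, Types.VARCHAR); // missing tags become NULL
                    }
                }
                ps.executeUpdate();
                ps.close();
            }
        }

    In CodeIgniter the same effect comes from query bindings or $this->db->insert() with an associative array, which handle the quoting internally.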

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that’s the background, for the curious. I won’t be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can’t change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons for one over the other; I’m not looking for lists of reasons like these (although you can add a comment if I've miss a doozy): WHY USE A DATABASE? instead of flatfile/other DB vs. file: if you need... Random Read / Transparent search optimization Advanced / varied / customizable Searching and sorting capabilities Transaction/rollback Locks, semaphores Concurrency control / Shared users Security 1-many/m-m is easier Easy modification Scalability Load Balancing Random updates / inserts / deletes Advanced query Administrative control of design, etc. SQL / learning curve Debugging / Logging Centralized / Live Backup capabilities Cached queries / dvlp & cache execution plans Interleaved update/read Referential integrity, avoid redundant/missing/corrupt/out-of-sync data Reporting (from on olap or oltp db) / turnkey generation tools [Disadvantages:] Important to get right the first time - professional design - but only b/c it's meant to last s/w & h/w cost Usu. over a network, speed issue (best vs. best design vs. local=even then a separate process req's marshalling/netwk layers/inter-p comm) indicies and query processing can stand in the way of simple processing (vs. flatfile) WHY USE FLATFILE: If you only need... Sequential Row processing only Limited usage append only (no reading, no master key/update) Only Update the record you're reading (fixed length recs only) Too big to fit into memory If Local disk / read-ahead network connection Portability / small system Email / cut & Paste / store as document by novice - simple format Low design learning curve but high cost later WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need... Processing a single db/ff record that was imported Known size of data Static data if hardcoding the table Narrow, unchanging use (e.g., one program or proc) -includes a class that will be shared, but encapsulates its data manipulation Extreme speed needed / high transaction frequency Random access - but search is dependent on implementation Following are some other posts about the topic: http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good http://stackoverflow.com/questions/2356851/database-vs-flat-files http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530 What I’d like to know is if anybody could recommend a hard, authoritative source containing these reasons. I’m looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. 
This will have a greater chance of eliciting the change that I’m looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
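
    For reference, the kind of auto-generated, in-source static table described above is just a sorted pair of arrays plus a binary search; a minimal sketch, with the keys and values invented purely for illustration:

        import java.util.Arrays;

        // Hedged sketch of generated lookup-table source: keys are emitted sorted so
        // Arrays.binarySearch works; the key/value content here is made up.
        public final class StaticLookupTable {
            private static final String[] KEYS   = { "AK", "AL", "AR", "AZ" };
            private static final double[] VALUES = { 1.25, 0.80, 0.85, 1.10 };

            private StaticLookupTable() { }

            public static Double lookup(String key) {
                int i = Arrays.binarySearch(KEYS, key);   // O(log n), no I/O, no database
                return i >= 0 ? VALUES[i] : null;
            }
        }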

  • MVC 3 ModelView passing parameters between view & controller

    - by Tobias Vandenbempt
    I've been playing with MVC 3 in a test project and have the following issue. I have Group & Subscriber entities and those are coupled through a SubscriberGroup table. Using the DetailView of Group I open a view of SubscriberGroup containing all subscribers. This list has the option to filter. So far it all works, however when I call the AddToGroup method on the controller it fails. Specifically it goes into the method but doesn't pass the subscriberCheckedModels list. Am I doing something wrong? View: SubscriberGroup Index.aspx <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.master" Inherits="System.Web.Mvc.ViewPage<Mail.Models.SubscriberCheckedListViewModel>" %> … <h2 class="common-box-title"> Add Subscribers to Group</h2> <p> <% using (Html.BeginForm("Index", "SubscriberGroup")) { %> <input name="filter" id="filter" type="text" /> <input type="submit" value="Search" /> <%} %> </p> <% using (Html.BeginForm("AddToGroup", "SubscriberGroup", Model,FormMethod.Get, null)) { %> <fieldset> <div style="display: inline-block; width: 70%; vertical-align: top;"> <% if (Model.subscribers.Count() != 0) { %> <table class="hor-minimalist-b"> <tr> <th> Add To Group </th> <th> Full Name </th> <th> Email </th> <th> Customer </th> </tr> <% foreach (var item in Model.subscribers) { %> <tr> <td> <%= Html.CheckBoxFor(modelItem => item.AddToGroup)%> </td> <td> <%= Html.DisplayFor(modelItem => item.subscriber.LastName)%> <%= Html.ActionLink(item.subscriber.FirstName + " " + item.subscriber.LastName, "Details", new { id = item.subscriber.SubscriberID })%> </td> <td> <%: Html.DisplayFor(modelItem => item.subscriber.Email)%> </td> <td> <%: Html.DisplayFor(modelItem => item.subscriber.Customer.Company)%> <%= Html.HiddenFor(modelItem => item.subscriber) %> </td> </tr> <% } %> <% ViewBag.subscribers = Model.subscribers; %> probeersel <%= Html.HiddenFor(model => model.subscribers) %> probeersel </table> <%} %> <%else { %> <p> No subscribers found.</p> <%} %> <input type="submit" value="Add Subscribers" /> </div> </fieldset> <%} %> Controller: SubscriberGroupController using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Security; using Mail.Models; namespace Mail.Controllers { public class SubscriberGroupController : Controller { private int groupID; private MailDBEntities db = new MailDBEntities(); // // GET: /SubscriberGroup/ public ActionResult Index(int id) { groupID = id; MembershipUser myObject = Membership.GetUser(); Guid UserID = Guid.Parse(myObject.ProviderUserKey.ToString()); UserCustomer usercustomer = db.UserCustomers.Single(s => s.UserID == UserID); var subscribers = from subscriber in db.Subscribers where (subscriber.CustomerID == usercustomer.CustomerID) | (subscriber.CustomerID == 0) select new SubscriberCheckedModel { subscriber = subscriber, AddToGroup = false }; SubscriberCheckedListViewModel test = new SubscriberCheckedListViewModel(); test.subscribers = subscribers; return View(test); } [HttpPost] public ActionResult Index(string filter) { MembershipUser myObject = Membership.GetUser(); Guid UserID = Guid.Parse(myObject.ProviderUserKey.ToString()); UserCustomer usercustomer = db.UserCustomers.Single(s => s.UserID == UserID); var subscribers2 = from subscriber in db.Subscribers where ((subscriber.FirstName.Contains(filter)|| subscriber.LastName.Contains(filter)) && (subscriber.CustomerID == usercustomer.CustomerID || subscriber.CustomerID == 0)) select new SubscriberCheckedModel { subscriber = subscriber, 
AddToGroup = false }; SubscriberCheckedListViewModel test = new SubscriberCheckedListViewModel(); test.subscribers = subscribers2.ToList(); return View(test); } [HttpPost] public ActionResult AddToGroup(SubscriberCheckedListViewModel test) { //test is null return RedirectToAction("Details", "Group", new { id = groupID }); } } } ViewModel: SubscriberGroupModel using System.Collections.Generic; using Mail; namespace Mail.Models { public class SubscriberCheckedModel { public Subscriber subscriber { get; set; } public bool AddToGroup { get; set; } } public class SubscriberCheckedListViewModel { public IEnumerable<SubscriberCheckedModel> subscribers { get; set; } } }

  • How can I resolve Hibernate 3's ConstraintViolationException when updating a Persistent Entity's Collection of Elements?

    - by Tim Visher
    I'm trying to discover why two nearly identical class sets are behaving different from Hibernate 3's perspective. I'm fairly new to Hibernate in general and I'm hoping I'm missing something fairly obvious about the mappings or timing issues or something along those lines but I spent the whole day yesterday staring at the two sets and any differences that would lead to one being able to be persisted and the other not completely escaped me. I appologize in advance for the length of this question but it all hinges around some pretty specific implementation details. I have the following class mapped with Annotations and managed by Hibernate 3.? (if the specific specific version turns out to be pertinent, I'll figure out what it is). Java version is 1.6. ... @Embeddable public class JobStateChange implements Comparable<JobStateChange> { @Temporal(TemporalType.TIMESTAMP) @Column(nullable = false) private Date date; @Enumerated(EnumType.STRING) @Column(nullable = false, length = JobState.FIELD_LENGTH) private JobState state; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "acting_user_id", nullable = false) private User actingUser; public JobStateChange() { } @Override public int compareTo(final JobStateChange o) { return this.date.compareTo(o.date); } @Override public boolean equals(final Object obj) { if (this == obj) { return true; } else if (!(obj instanceof JobStateChange)) { return false; } JobStateChange candidate = (JobStateChange) obj; return this.state == candidate.state && this.actingUser.equals(candidate.getUser()) && this.date.equals(candidate.getDate()); } @Override public int hashCode() { return this.state.hashCode() + this.actingUser.hashCode() + this.date.hashCode(); } } It is mapped as a Hibernate CollectionOfElements in the class Job as follows: ... @Entity @Table( name = "job", uniqueConstraints = { @UniqueConstraint( columnNames = { "agency", //Job Name "payment_type", //Job Name "payment_file", //Job Name "date_of_payment", "payment_control_number", "truck_number" }) }) public class Job implements Serializable { private static final long serialVersionUID = -1131729422634638834L; ... @org.hibernate.annotations.CollectionOfElements @JoinTable(name = "job_state", joinColumns = @JoinColumn(name = "job_id")) @Sort(type = SortType.NATURAL) private final SortedSet<JobStateChange> stateChanges = new TreeSet<JobStateChange>(); ... public void advanceState( final User actor, final Date date) { JobState nextState; LOGGER.debug("Current state of {} is {}.", this, this.getCurrentState()); if (null == this.currentState) { nextState = JobState.BEGINNING; } else { if (!this.isAdvanceable()) { throw new IllegalAdvancementException(this.currentState.illegalAdvancementStateMessage); } if (this.currentState.isDivergent()) { nextState = this.currentState.getNextState(this); } else { nextState = this.currentState.getNextState(); } } JobStateChange stateChange = new JobStateChange(nextState, actor, date); this.setCurrentState(stateChange.getState()); this.stateChanges.add(stateChange); LOGGER.debug("Advanced {} to {}", this, this.getCurrentState()); } private void setCurrentState(final JobState jobState) { this.currentState = jobState; } boolean isAdvanceable() { return this.getCurrentState().isAdvanceable(this); } ... 
@Override public boolean equals(final Object obj) { if (obj == this) { return true; } else if (!(obj instanceof Job)) { return false; } Job otherJob = (Job) obj; return this.getName().equals(otherJob.getName()) && this.getDateOfPayment().equals(otherJob.getDateOfPayment()) && this.getPaymentControlNumber().equals(otherJob.getPaymentControlNumber()) && this.getTruckNumber().equals(otherJob.getTruckNumber()); } @Override public int hashCode() { return this.getName().hashCode() + this.getDateOfPayment().hashCode() + this.getPaymentControlNumber().hashCode() + this.getTruckNumber().hashCode(); } ... } The purpose of JobStateChange is to record when the Job moves through a series of State Changes that are outline in JobState as enums which know about advancement and decrement rules. The interface used to advance Jobs through a series of states is to call Job.advanceState() with a Date and a User. If the Job is advanceable according to rules coded in the enum, then a new StateChange is added to the SortedSet and everyone's happy. If not, an IllegalAdvancementException is thrown. The DDL this generates is as follows: ... drop table job; drop table job_state; ... create table job ( id bigint generated by default as identity, current_state varchar(25), date_of_payment date not null, beginningCheckNumber varchar(8) not null, item_count integer, agency varchar(10) not null, payment_file varchar(25) not null, payment_type varchar(25) not null, endingCheckNumber varchar(8) not null, payment_control_number varchar(4) not null, truck_number varchar(255) not null, wrapping_system_type varchar(15) not null, printer_id bigint, primary key (id), unique (agency, payment_type, payment_file, date_of_payment, payment_control_number, truck_number) ); create table job_state ( job_id bigint not null, acting_user_id bigint not null, date timestamp not null, state varchar(25) not null, primary key (job_id, acting_user_id, date, state) ); ... alter table job add constraint FK19BBD12FB9D70 foreign key (printer_id) references printer; alter table job_state add constraint FK57C2418FED1F0D21 foreign key (acting_user_id) references app_user; alter table job_state add constraint FK57C2418FABE090B3 foreign key (job_id) references job; ... The database is seeded with the following data prior to running tests ... insert into job (id, agency, payment_type, payment_file, payment_control_number, date_of_payment, beginningCheckNumber, endingCheckNumber, item_count, current_state, printer_id, wrapping_system_type, truck_number) values (-3, 'RRB', 'Monthly', 'Monthly','4501','1998-12-01 08:31:16' , '00000001','00040000', 40000, 'UNASSIGNED', null, 'KERN', '02'); insert into job_state (job_id, acting_user_id, date, state) values (-3, -1, '1998-11-30 08:31:17', 'UNASSIGNED'); ... After the database schema is automatically generated and rebuilt by the Hibernate tool. The following test runs fine up until the call to Session.flush() ... @ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" }) public class JobDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private JobDao jobDao; @Autowired private SessionFactory sessionFactory; @Autowired private UserService userService; @Autowired private PrinterService printerService; ... 
@Test public void saveJob_JobAdvancedToAssigned_AllExpectedStateChanges() { //Get an unassigned Job Job job = this.jobDao.getJob(-3L); assertEquals(JobState.UNASSIGNED, job.getCurrentState()); Date advancedToUnassigned = new GregorianCalendar(1998, 10, 30, 8, 31, 17).getTime(); assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate()); //Satisfy advancement constraints and advance job.setPrinter(this.printerService.getPrinter(-1L)); Date advancedToAssigned = new Date(); job.advanceState( this.userService.getUserByUsername("admin"), advancedToAssigned); assertEquals(JobState.ASSIGNED, job.getCurrentState()); assertEquals(advancedToUnassigned, job.getStateChange(JobState.UNASSIGNED).getDate()); assertEquals(advancedToAssigned, job.getStateChange(JobState.ASSIGNED).getDate()); //Persist to DB this.sessionFactory.getCurrentSession().flush(); ... } ... } The error thrown is SQLCODE=-803, SQLSTATE=23505: could not insert collection rows: [jaci.model.job.Job.stateChanges#-3] org.hibernate.exception.ConstraintViolationException: could not insert collection rows: [jaci.model.job.Job.stateChanges#-3] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:94) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1416) at org.hibernate.action.CollectionUpdateAction.execute(CollectionUpdateAction.java:86) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:170) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027) at jaci.dao.JobDaoIntegrationTest.saveJob_JobAdvancedToAssigned_AllExpectedStateChanges(JobDaoIntegrationTest.java:98) at org.springframework.test.context.junit4.SpringTestMethod.invoke(SpringTestMethod.java:160) at org.springframework.test.context.junit4.SpringMethodRoadie.runTestMethod(SpringMethodRoadie.java:233) at org.springframework.test.context.junit4.SpringMethodRoadie$RunBeforesThenTestThenAfters.run(SpringMethodRoadie.java:333) at org.springframework.test.context.junit4.SpringMethodRoadie.runWithRepetitions(SpringMethodRoadie.java:217) at org.springframework.test.context.junit4.SpringMethodRoadie.runTest(SpringMethodRoadie.java:197) at org.springframework.test.context.junit4.SpringMethodRoadie.run(SpringMethodRoadie.java:143) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:160) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:97) Caused by: com.ibm.db2.jcc.b.lm: DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505, SQLERRMC=1;ACI_APP.JOB_STATE, DRIVER=3.50.152 at com.ibm.db2.jcc.b.wc.a(wc.java:575) at com.ibm.db2.jcc.b.wc.a(wc.java:57) at com.ibm.db2.jcc.b.wc.a(wc.java:126) at com.ibm.db2.jcc.b.tk.b(tk.java:1593) at com.ibm.db2.jcc.b.tk.c(tk.java:1576) at com.ibm.db2.jcc.t4.db.k(db.java:353) at com.ibm.db2.jcc.t4.db.a(db.java:59) at com.ibm.db2.jcc.t4.t.a(t.java:50) at com.ibm.db2.jcc.t4.tb.b(tb.java:200) at com.ibm.db2.jcc.b.uk.Gb(uk.java:2355) at com.ibm.db2.jcc.b.uk.e(uk.java:3129) at com.ibm.db2.jcc.b.uk.zb(uk.java:568) 
at com.ibm.db2.jcc.b.uk.executeUpdate(uk.java:551) at org.hibernate.jdbc.NonBatchingBatcher.addToBatch(NonBatchingBatcher.java:46) at org.hibernate.persister.collection.AbstractCollectionPersister.insertRows(AbstractCollectionPersister.java:1389) Therein lies my problem… A nearly identical Class set (in fact, so identical that I've been chomping at the bit to make it a single class that serves both business entities) runs absolutely fine. It is identical except for name. Instead of Job it's Web. Instead of JobStateChange it's WebStateChange. Instead of JobState it's WebState. Both Job and Web's SortedSet of StateChanges are mapped as a Hibernate CollectionOfElements. Both are @Embeddable. Both are SortType.Natural. Both are backed by an Enumeration with some advancement rules in it. And yet when a nearly identical test is run for Web, no issue is discovered and the data flushes fine. For the sake of brevity I won't include all of the Web classes here, but I will include the test and if anyone wants to see the actual sources, I'll include them (just leave a comment). The data seed: insert into web (id, stock_type, pallet, pallet_id, date_received, first_icn, last_icn, shipment_id, current_state) values (-1, 'PF', '0011', 'A', '2008-12-31 08:30:02', '000000001', '000080000', -1, 'UNSTAGED'); insert into web_state (web_id, date, state, acting_user_id) values (-1, '2008-12-31 08:30:03', 'UNSTAGED', -1); The test: ... @ContextConfiguration(locations = { "/applicationContext-data.xml", "/applicationContext-service.xml" }) public class WebDaoIntegrationTest extends AbstractTransactionalJUnit4SpringContextTests { @Autowired private WebDao webDao; @Autowired private UserService userService; @Autowired private SessionFactory sessionFactory; ... @Test public void saveWeb_WebAdvancedToNewState_AllExpectedStateChanges() { Web web = this.webDao.getWeb(-1L); Date advancedToUnstaged = new GregorianCalendar(2008, 11, 31, 8, 30, 3).getTime(); assertEquals(WebState.UNSTAGED, web.getCurrentState()); assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate()); Date advancedToStaged = new Date(); web.advanceState( this.userService.getUserByUsername("admin"), advancedToStaged); this.sessionFactory.getCurrentSession().flush(); web = this.webDao.getWeb(web.getId()); assertEquals( "Web should have moved to STAGED State.", WebState.STAGED, web.getCurrentState()); assertEquals(advancedToUnstaged, web.getState(WebState.UNSTAGED).getDate()); assertEquals(advancedToStaged, web.getState(WebState.STAGED).getDate()); assertNotNull(web.getState(WebState.UNSTAGED)); assertNotNull(web.getState(WebState.STAGED)); } ... } As you can see, I assert that the Web was reconstituted the way I expect, I advance it, flush it to the DB, and then re-get it and verify that the states are as I expect. Everything works perfectly. Not so with Job. A possibly pertinent detail: the reconstitution code works fine if I cease to map JobStateChange.data as a TIMESTAMP and instead as a DATE, and ensure that all of the StateChanges always occur on different Dates. The problem is that this particular business entity can go through many state changes in a single day and so it needs to be sorted by time stamp rather than by date. If I don't do this then I can't sort the StateChanges correctly. That being said, WebStateChange.date is also mapped as a TIMESTAMP and so I again remain absolutely befuddled as to where this error is arising from. 
I tried to do a fairly thorough job of giving all of the technical details of the implementation but as this particular question is very implementation specific, if I missed anything just let me know in the comments and I'll include it. Thanks so much for your help! UPDATE: Since it turns out to be important to the solution of my problem, I have to include the pertinent bits of the WebStateChange class as well. ... @Embeddable public class WebStateChange implements Comparable<WebStateChange> { @Temporal(TemporalType.TIMESTAMP) @Column(nullable = false) private Date date; @Enumerated(EnumType.STRING) @Column(nullable = false, length = WebState.FIELD_LENGTH) private WebState state; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "acting_user_id", nullable = false) private User actingUser; ... WebStateChange( final WebState state, final User actingUser, final Date date) { ExceptionUtils.illegalNullArgs(state, actingUser, date); this.state = state; this.actingUser = actingUser; this.date = new Date(date.getTime()); } @Override public int compareTo(final WebStateChange otherStateChange) { return this.date.compareTo(otherStateChange.date); } @Override public boolean equals(final Object candidate) { if (this == candidate) { return true; } else if (!(candidate instanceof WebStateChange)) { return false; } WebStateChange candidateWebState = (WebStateChange) candidate; return this.getState() == candidateWebState.getState() && this.getUser().equals(candidateWebState.getUser()) && this.getDate().equals(candidateWebState.getDate()); } @Override public int hashCode() { return this.getState().hashCode() + this.getUser().hashCode() + this.getDate().hashCode(); } ... }
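    Not necessarily the root cause of the SQLCODE -803 above, but one detail worth a second look given the mapping shown: JobStateChange's compareTo() orders (and, inside a TreeSet, deduplicates) purely by date, while the generated job_state primary key is (job_id, acting_user_id, date, state). The minimal, hypothetical Java sketch below (plain JDK, no Hibernate) shows how a date-only comparator makes a TreeSet silently treat two same-timestamp state changes as one element; a tie-breaker on state keeps them distinct.
    import java.util.Date;
    import java.util.TreeSet;

    // Minimal stand-in for JobStateChange: natural ordering by date only, as in the mapping above.
    class StateChange implements Comparable<StateChange> {
        final Date date;
        final String state;
        StateChange(Date date, String state) { this.date = date; this.state = state; }
        @Override public int compareTo(StateChange o) {
            int byDate = this.date.compareTo(o.date);
            // Uncomment the tie-breaker to keep same-timestamp changes distinct in the sorted set:
            // if (byDate == 0) return this.state.compareTo(o.state);
            return byDate;
        }
        @Override public String toString() { return state + "@" + date.getTime(); }
    }

    public class SortedSetPitfall {
        public static void main(String[] args) {
            Date now = new Date();
            TreeSet<StateChange> changes = new TreeSet<>();
            changes.add(new StateChange(now, "UNASSIGNED"));
            changes.add(new StateChange(now, "ASSIGNED")); // same timestamp: TreeSet sees a duplicate
            System.out.println(changes);                   // prints only one element
        }
    }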

    Read the article

  • Cannot Install/Start MySQL Server

    - by Peezy Bro
    Okay, I decided to migrate from MySQL Server 5.5.37 to Percona Server 5.6. I ended up removing MySQL Server by the following: sudo apt-get --purge remove mysql-server mysql-server-5.5 mysql-server-core-5.5 mysql-client mysql-client-core-5.5 mysql-common sudo apt-get autoremove sudo apt-get autoclean rm -rf /var/lib/mysql rm -rf /etc/mysql Now here is my problem, when I try to install MySQL Server 5.6 it goes through its process and when it asks me for a password, it comes up with Cannot set MySQL "root" password. After it "installs" MySQL wont start up and I get permission denied?. Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 35 not upgraded. brandon@brandon-DB:~$ sudo apt-get install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server-5.5 mysql-server-core-5.5 Suggested packages: libmldbm-perl libnet-daemon-perl libplrpc-perl libsql-statement-perl tinyca mailx The following NEW packages will be installed: libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server mysql-server-5.5 mysql-server-core-5.5 0 upgraded, 10 newly installed, 0 to remove and 35 not upgraded. Need to get 0 B/8,955 kB of archives. After this operation, 96.3 MB of additional disk space will be used. Do you want to continue? [Y/n] y Preconfiguring packages ... Selecting previously unselected package mysql-common. (Reading database ... 167760 files and directories currently installed.) Preparing to unpack .../mysql-common_5.5.37-0ubuntu0.14.04.1_all.deb ... Unpacking mysql-common (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package libmysqlclient18:amd64. Preparing to unpack .../libmysqlclient18_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package libdbi-perl. Preparing to unpack .../libdbi-perl_1.630-1_amd64.deb ... Unpacking libdbi-perl (1.630-1) ... Selecting previously unselected package libdbd-mysql-perl. Preparing to unpack .../libdbd-mysql-perl_4.025-1_amd64.deb ... Unpacking libdbd-mysql-perl (4.025-1) ... Selecting previously unselected package libterm-readkey-perl. Preparing to unpack .../libterm-readkey-perl_2.31-1_amd64.deb ... Unpacking libterm-readkey-perl (2.31-1) ... Selecting previously unselected package mysql-client-core-5.5. Preparing to unpack .../mysql-client-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-client-5.5. Preparing to unpack .../mysql-client-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server-core-5.5. Preparing to unpack .../mysql-server-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Processing triggers for man-db (2.6.7.1-1) ... Setting up mysql-common (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server-5.5. (Reading database ... 168116 files and directories currently installed.) Preparing to unpack .../mysql-server-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... 
Unpacking mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server. Preparing to unpack .../mysql-server_5.5.37-0ubuntu0.14.04.1_all.deb ... Unpacking mysql-server (5.5.37-0ubuntu0.14.04.1) ... Processing triggers for ureadahead (0.100.0-16) ... Processing triggers for man-db (2.6.7.1-1) ... Setting up libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ... Setting up libdbi-perl (1.630-1) ... Setting up libdbd-mysql-perl (4.025-1) ... Setting up libterm-readkey-perl (2.31-1) ... Setting up mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing package mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin (2.19-0ubuntu6) ... No apport report written because the error message indicates its a followup error from a previous failure. Processing triggers for ureadahead (0.100.0-16) ... Errors were encountered while processing: mysql-server-5.5 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) I have all my database/tables dumped and on a seperate HDD. This is also a Dev Machine and not my main Production Machine. I also backed up the MySQL_Config and MySQL_Data.

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal access in arbitrary 'tick' time units. Click the image to see it full sized. Pooled connections Beginning with the top left graph, At tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80 and a final hundred users added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. Looking at the top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool is initially at its default size 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes but this is almost too small to see on the graph. Non-pooled connections Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as for the earlier DRCP case. The database is handling this load. But look at the number of Server processes on the top right graph. 
    There is now a one-to-one correspondence between Apache/PHP processes and DB server processes. Each PHP process has one DB server process dedicated to it. Hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load.
    Summary
    Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated to the SGA and letting the system scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.
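    The demo above is PHP, but the pool is selected purely by the connect string, so the same one-line change applies to other clients too. As a rough illustration only (host, service name, table and column are placeholders, and it assumes the JDBC thin driver's EZConnect ':POOLED' server-type suffix), a Java client would ask for a DRCP pooled server like this:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DrcpSketch {
        public static void main(String[] args) throws Exception {
            // ":POOLED" requests a DRCP pooled server, mirroring the "myhost/orcl:pooled" PHP string above.
            String url = "jdbc:oracle:thin:@//myhost/orcl:POOLED";
            try (Connection c = DriverManager.getConnection(url, "phpdemo", "welcome");
                 Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("SELECT description FROM parts WHERE ROWNUM <= 5")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            } // closing the connection releases the pooled server back to DRCP instead of terminating it
        }
    }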

    Read the article

  • How big can my SharePoint 2010 installation be?

    - by Sahil Malik
    3 years ago, I had published “How big can my SharePoint 2007 installation be?” Well, SharePoint 2010 has significant under-the-covers improvements. So, how big can your SharePoint 2010 installation be? There are three kinds of limits you should know about: Hard limits that cannot be exceeded by design. Configurable limits that are, well, configurable – but the default values are set for a pretty good reason, so if you need to tweak, plan and understand before you tweak. Soft limits – you can exceed them, but it is not recommended that you do. Before you read any of the limits, read these two important disclaimers - 1. The limit depends on what you’re doing. So, don’t take the below as gospel; the reality depends on your situation. 2. There are many additional considerations in planning your SharePoint solution scalability and performance, besides just the below. So with those in mind, here goes.
    Hard Limits -
    Zones per web app: 5
    RBS NAS performance: Time to first byte of any response from NAS must be less than 20 milliseconds
    List row size: 8000 bytes, driven by how SP stores list items internally
    Max file size: 2GB (default is 50MB, configurable). RBS does not increase this limit.
    Search metadata properties: 10,000 per item crawled (pretty damn high, you’ll never need to worry about it).
    Max # of concurrent in-memory enterprise content types: 5000 per web server, per tenant
    Max # of external system connections: 500 per web server
    PerformancePoint services using Excel services as a datasource: No single query can fetch more than 1 million Excel cells
    Office Web Apps Renders: One doc per second, per CPU core, per Application server, limited to a maximum of 8 cores.
    Configurable Limits -
    Row Size Limit: 6, configurable via the SPWebApplication.MaxListItemRowStorage property
    List view lookup: 8 join operations per query
    Max number of list items that a single operation can process at one time in normal hours: 5000. Configurable via SPWebApplication.MaxItemsPerThrottledOperation. Also you get a warning at 3000, which is configurable via SPWebApplication.MaxItemsPerThrottledOperationWarningLevel. In addition, throttle overrides can be requested, throttle overrides can be disabled, and time windows can be set when throttle is disabled.
    Max number of list items for administrators that a single operation can process at one time in normal hours: 20000. Configurable via SPWebApplication.MaxItemsPerThrottledOperationOverride
    Enumerating subsites: 2000
    Word and Powerpoint co-authoring simultaneous editors: 10 (hard limit is 99)
    # of webparts on a page: 25
    Search Crawl DBs per search service app: 10
    Items per crawl db: 25 million
    Search Keywords: 200 per site collection. There is a max limit of 5000, which can then be modified by editing the web.config/client.config.
    Concurrent # of workflows on a content db: 15. Workflows running in the timer service are not counted in this limit. Further workflows are queued. Can be configured via the Set-SPFarmConfig powershell commandlet.
    Number of events picked by the workflow timer job and delivered to workflows: 100. You can increase this limit by running additional instances of the workflow timer service.
    Visio services file size: 50MB
    Visio web drawing recalculation timeout: 120 seconds. Configurable via the Powershell commandlet Set-SPVisioPerformance
    Visio services minimum and maximum cache age for data connected diagrams: 0 to 24 hours. Default is 60 minutes. Configurable via the Powershell commandlet Set-SPVisioPerformance
    Soft Limits -
    Content Databases: 300 per web app
    Application Pools: 10 per web server
    Managed Paths: 20 per web app
    Content Database Size: 200GB per Content DB
    Size of 1 site collection: 100GB
    # of sites in a site collection: 250,000
    Documents in a library: 30 million, with nesting. Depends heavily on type and usage and size of documents.
    Items: 30 million. Depends heavily on usage of items.
    SPGroups one SPUser can be in: 5000
    Users in a site collection: 2 million, depends on UI, nesting, containers and underlying user store
    AD Principals in a SPGroup: 5000
    SPGroups in a site collection: 10000
    Search Service Instances: 20
    Indexed Items in Search: 100 million
    Crawl Log entries: 100 million
    Search Alerts: 1 million per search application
    Search Crawled Properties: 1/2 million
    URL removals in search: 100 removals per operation
    User Profiles: 2 million per service application
    Social Tags: 500 million per social database

    Read the article

  • HDFC Bank's Journey to Oracle Private Database Cloud

    - by Nilesh Agrawal
    One of the key takeaways from a recent post by Sushil Kumar is the importance of business initiative that drives the transformational journey from legacy IT to enterprise private cloud. The journey that leads to a agile, self-service and efficient infrastructure with reduced complexity and enables IT to deliver services more closely aligned with business requirements. Nilanjay Bhattacharjee, AVP, IT of HDFC Bank presented a real-world case study based on one such initiative in his Oracle OpenWorld session titled "HDFC BANK Journey into Oracle Database Cloud with EM 12c DBaaS". The case study highlighted in this session is from HDFC Bank’s Lending Business Segment, which comprises roughly 50% of Bank’s top line. Bank’s Lending Business is always under pressure to launch “New Schemes” to compete and stay ahead in this segment and IT has to keep up with this challenging business requirement. Lending related applications are highly dynamic and go through constant changes and every single and minor change in each related application is required to be thoroughly UAT tested certified before they are certified for production rollout. This leads to a constant pressure in IT for rapid provisioning of UAT databases on an ongoing basis to enable faster time to market. Nilanjay joined Sushil Kumar, VP, Product Strategy, Oracle, during the Enterprise Manager general session at Oracle OpenWorld 2012. Let's watch what Nilanjay had to say about their recent Database cloud deployment. “Agility” in launching new business schemes became the key business driver for private database cloud adoption in the Bank. Nilanjay spent an hour discussing it during his session. Let's look at why Database-as-a-Service(DBaaS) model was need of the hour in this case  - Average 3 days to provision UAT Database for Loan Management Application Silo’ed UAT environment with Average 30% utilization Compliance requirement consume UAT testing resources DBA activities leads to $$ paid to SI for provisioning databases manually Overhead in managing configuration drift between production and test environments Rollout impact/delay on new business initiatives The private database cloud implementation progressed through 4 fundamental phases - Standardization, Consolidation, Automation, Optimization of UAT infrastructure. Project scoping was carried out and end users and stakeholders were engaged early on right from planning phase and including all phases of implementation. Standardization and Consolidation phase involved multiple iterations of planning to first standardize on infrastructure, db versions, patch levels, configuration, IT processes etc and with database level consolidation project onto Exadata platform. It was also decided to have existing AIX UAT DB landscape covered and EM 12c DBaaS solution being platform agnostic supported this model well. Automation and Optimization phase provided the necessary Agility, Self-Service and efficiency and this was made possible via EM 12c DBaaS. EM 12c DBaaS Self-Service/SSA Portal was setup with required zones, quotas, service templates, charge plan defined. There were 2 zones implemented - Exadata zone  primarily for UAT and benchmark testing for databases running on Exadata platform and second zone was for AIX setup to cover other databases those running on AIX. Metering and Chargeback/Showback capabilities provided business and IT the framework for cloud optimization and also visibility into cloud usage. 
More details on UAT cloud implementation, related building blocks and EM 12c DBaaS solution are covered in Nilanjay's OpenWorld session here. Some of the key Benefits achieved from UAT cloud initiative are - New business initiatives can be easily launched due to rapid provisioning of UAT Databases [ ~3 hours ] Drastically cut down $$ on SI for DBA Activities due to Self-Service Effective usage of infrastructure leading to  better ROI Empowering  consumers to provision database using Self-Service Control on project schedule with DB end date aligned to project plan submitted during provisioning Databases provisioned through Self-Service are monitored in EM and auto configured for Alerts and KPI Regulatory requirement of database does not impact existing project in queue This table below shows typical list of activities and tasks involved when a end user requests for a UAT database. EM 12c DBaaS solution helped reduce UAT database provisioning time from roughly 3 days down to 3 hours and this timing also includes provisioning time for database with production scale data (ranging from 250 G to 2 TB of data) - And it's not just about time to provision,  this initiative has enabled an agile, efficient and transparent UAT environment where end users are empowered with real control of cloud resources and IT's role is shifted as enabler of strategic services instead of being administrator of all user requests. The strong collaboration between IT and business community right from planning to implementation to go-live has played the key role in achieving this common goal of enterprise private cloud. Finally, real cloud is here and this cloud is accompanied with rain (business benefits) as well ! For more information, please go to Oracle Enterprise Manager  web page or  follow us at :  Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises. Several tests were performed, mainly focused on: Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor Overall throughput of the SPARC T4-1 server using multiple threads The tests consisted in reading large amounts of data --ten's of gigabytes--, processing and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations. Multi-thread: Testing throughput on the Oracle T4-1 The tests were performed with different number of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two stripped internal disks and one single internal disk. All storage devices used ZFS as filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure: customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT 1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008 2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008 3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008 4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007 […] The following graphs present the results of our tests: Unsurprisingly up to 16 threads, all files fit in the ZFS cache a.k.a L2ARC : once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards however, it is clear that IO becomes a bottleneck, having a good IO subsystem is thus key. Single-disk performance collapses whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- were used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second insertion in an Oracle table using 48 parallel threads. Single-thread: Testing the single thread performance Seven different tests were performed on both servers. 
    Given that only one thread, and thus only one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache.
    Read File → Filter → Write File: Read file, filter data, write the filtered data in a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
    Read File → Load Database Table: Read file, insert into a single Oracle table.
    Average: Read file, compute the average of a numeric column, write the result in a new file.
    Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data in a new file.
    Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
    Transform: Read file, transform, write the result data in a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    Sort: Read file, sort a numeric and alphanumeric column, write the result data in a new file.
    The following table and graph present the final results of the tests. Throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.
    Test | T4-1 (Time s.) | T5140 (Time s.) | Improvement | T4-1 (Throughput) | T5140 (Throughput)
    Read/Filter/Write | 125 | 806 | 645% | 100 | 16
    Read/Load Database | 195 | 1111 | 570% | 64 | 11
    Average | 96 | 557 | 580% | 130 | 22
    Division & Square Root | 161 | 1054 | 655% | 78 | 12
    Oracle DB Dump | 164 | 945 | 576% | 76 | 13
    Transform | 159 | 1124 | 707% | 79 | 11
    Sort | 251 | 1336 | 532% | 50 | 9
    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
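    For a feel of what the Transform test does per row, here is a rough plain-Java sketch of the same operation against the sample line format shown earlier (not Talend's generated code, and the choice of the two concatenated columns, first and last name, is an assumption):
    public class TransformSketch {
        // One row of the sample file: customerID;FirstName;LastName;StreetAddress;City;State;Zip;...
        static String transform(String line) {
            String[] f = line.split(";", -1);
            f[3] = f[3].toUpperCase();          // set the address column to upper case
            String extra = f[1] + " " + f[2];   // extra column appended at the end: concatenation of two columns
            return String.join(";", f) + ";" + extra;
        }

        public static void main(String[] args) {
            String row = "1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008";
            System.out.println(transform(row));
            // 1;Ronald;Reagan;SOUTH HIGHWAY;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008;Ronald Reagan
        }
    }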

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a Role can be data secured by multiple Business Units (BUs).  Separate Data Roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that while mapping these roles in HCM Role Provisioning Rules, fewer number of entires need to be made. This could facilitate maintenance for enterprises with a large number of Business Units. Note: The example below applies as well if the securing entity is Inventory Organization. Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" Data Role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany. Figure 1. A User with a Data Role restricting them to Data from BU: Vision Operations With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices. Figure 2.The List Of Values of Business Units is limited to single one. This is the effect of the Data Role granted to that user as can be seen in Figure 1 In order to create a data role that secures by multiple BUs,  we need to start by creating a condition that groups those Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU View .  That Condition will later be used to create a data policy for our newly created Role.  The BU View is a Database resource and  is accessed from APM as seen in the search below Figure 3.Viewing a Database Resource in APM The next step is create a new condition,  in which we define a sql predicate that includes 2 BUs ( The ids below refer to Vision Operations and Vision Germany).  At this point we have simply created a standalone condition.  We have not used this condition yet, and security is therefore not affected. Figure 4. Custom Role that inherits the Purchase Order Overview Duty We are now ready to create our Data Policy.  in APM, we search for our newly Created Role and Navigate to “Find Global Policies”.  we query the Role we want to secure and navigate to view its global policies. Figure 5. The Job Role we plan on securing We can see that the role was not defined with a Data Policy . So will create one that uses the condition we created earlier.   Figure 6. Creating a New Data Policy In the General Information tab, we have to specify the DB Resource that the Security Policy applies to:  In our case this is the BU View Figure 7. Data Policy Definition - Selection of the DB Resource we will secure by In the Rules Tab, we  make the rule applicable to multiple values of the DB Resource we selected in the previous tab.  This is where we associate the condition we created against the BU view to this data policy by entering the Condition name in the Condition field Figure 8. Data Policy Rule The last step of Defining the Data Policy, consists of  explicitly selecting  the Actions that are goverened by this Data Policy.  In this case for example we select the Actions displayed below in the right pane. Once the record is saved , we are ready to use our newly secured Data Role. Figure 9. Data Policy Actions We can now see a new Data Policy associated with our Role.  Figure 10. Role is now secured by a Data Policy We now Assign that new Role to the User.  
Of course this does not have to be done in OIM and can be done using a Provisioning Rule in HCM. Figure 11. Role assigned to the User who previously was granted the Vision Ops secured role. Once that user accesses the Invoices Workarea this is what they see: In the image below the LOV of Business Unit returns the two values defined in our data policy namely: Vision Operations and Vision Germany Figure 12. The List Of Values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user as can be seen in Figure 11

    Read the article

  • Can't remove the libpcap0.8 package

    - by Yogesh
    I am getting error when running apt-get remove root@System:~/Downloads# sudo apt-get remove The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. and when I ran apt-get remove -f this is what happens: root@System:~/Downloads# sudo apt-get remove -f The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# clear root@System:~/Downloads# sudo apt-get remove -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads# root@System:~/Downloads# sudo apt-get check Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libpcap0.8 : Breaks: libpcap0.8:i386 (!= 1.4.0-2) but 1.5.3-2 is installed libpcap0.8:i386 : Breaks: libpcap0.8 (!= 1.5.3-2) but 1.4.0-2 is installed libpcap0.8-dev : Depends: libpcap0.8 (= 1.5.3-2) but 1.4.0-2 is installed E: Unmet dependencies. Try using -f. 
root@System:~/Downloads# apt-cache policy libpcap0.8:amd64 libpcap0.8 libpcap0.8-dev libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8: Installed: 1.4.0-2 Candidate: 1.5.3-2 Version table: 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages *** 1.4.0-2 0 100 /var/lib/dpkg/status libpcap0.8-dev: Installed: 1.5.3-2 Candidate: 1.5.3-2 Version table: *** 1.5.3-2 0 500 http://in.archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages 100 /var/lib/dpkg/status root@System:~/Downloads# root@System:~/Downloads# sudo apt-get -f remove libpcap0.8 libpcap0.8-dev libpcap0.8-dev:i386 libpcap0.8:i386 Reading package lists... Done Building dependency tree Reading state information... Done Package 'libpcap0.8-dev:i386' is not installed, so not removed. Did you mean 'libpcap0.8-dev'? You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libpcap-dev : Depends: libpcap0.8-dev but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). root@System:~/Downloads# sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libpcap0.8 The following packages will be upgraded: libpcap0.8 1 upgraded, 0 newly installed, 0 to remove and 365 not upgraded. 2 not fully installed or removed. Need to get 0 B/110 kB of archives. After this operation, 13.3 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 163539 files and directories currently installed.) Preparing to unpack .../libpcap0.8_1.5.3-2_amd64.deb ... Unpacking libpcap0.8:amd64 (1.5.3-2) over (1.4.0-2) ... dpkg: error processing archive /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb (--unpack): trying to overwrite shared '/usr/share/man/man7/pcap-filter.7.gz', which is different from other instances of package libpcap0.8:amd64 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: /var/cache/apt/archives/libpcap0.8_1.5.3-2_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) root@System:~/Downloads#

    Read the article

  • Updating a database connection password using a script

    - by Tim Dexter
    An interesting customer requirement that I thought was worthy of sharing today. Thanks to James for the requirement and Bryan for the proposed solution and me for testing the solution and proving it works :0) A customers implementation of Sarbanes Oxley requires them to change all database account passwords every 90 days. This is scripted leveraging shell scripts today for most of their environments. But how can they manage the BI Publisher connections? Now, the customer is running 11g and therefore using weblogic on the middle tier, which is the first clue to Bryans proposed solution. To paraphrase and embellish Bryan's solution a little; why not use a JNDI connection from BIP to the database. Then employ the web logic scripting engine to make updates to the JNDI as needed? BIP is completely uninvolved and with a little 'timing' users will be completely unaware of the password updates i.e. change the password when reports are not being executed. Perfect! James immediately tracked down the WLST script that could be used here, http://middlewaremagic.com/weblogic/?p=4261 (thanks Ravish) Now it was just a case of testing the theory. Some steps: Create the JNDI connection in WLS Create the JNDI connection in BI Publisher pointing to the WLS connection Build new data models using or re-point data sources to use the JNDI connection. Create the WLST script to update the WLS JNDI password as needed. Test! Some details. Creating the JNDI connection in web logic is pretty straightforward. Log into hte console and look for Data Sources under the Services section of the home page and click it Click New >> Generic Datasource Give the connection a name. For the JNDI name, prefix it with 'jdbc/' so I have 'jdbc/localdb' - this name is important you'll need it on the BIP side. Select your db type - this will influence the drivers and information needed on the next page. Being a company man, Im using an Oracle db. Click Next Select the driver of choice, theres lots I know, you can read about them I just chose 'Oracle's Driver (Thin) for Instance connections; Versions 9.0.1 and later' Click Next >> Next Fill out the db name (SID), server, port, username to connect and password >> Next Test the config to ensure you can connect. >> Next Now you need to deploy the connection to your BI server, select it and click Next. You're done with the JNDI config. Creating the JNDI connection on the Publisher side is covered here. Just remember to the connection name you created in WLS e.g. 'jdbc/localdb' Not gonna tell you how to do this, go read the user guide :0) Suffice to say, it works. This requires a little reading around the subject to understand the scripting engine and how to execute scripts. Nicely covered here. However a bit of googlin' and I found an even easier way of running the script. ${ServerHome}/common/bin/wlst.sh updatepwd.py Where updatepwd.py is my script file, it can be in another directory. As part of the wlst.sh script your environment is set up for you so its very simple to execute. The nitty gritty: Need to take Ravish's script above and create a file with a .py extension. Its going to need some modification, as he explains on the web page, to make it work in your environment. I played around with it for a while but kept running into errors. The script as is, tries to loop through all of your connections and modify the user and passwords for each. Not quite what we are looking for. Remember our requirement is to just update the password for a given connection. 
I also found another issue with the script. WLS 10.x does not allow updates to passwords using clear type ie un-encrypted text while the server is in production mode. Its a bit much to set it back to developer mode bounce it, change the passwords and then bounce and then change back to production and bounce again. After lots of messing about I finally came up with the following: ############################################################################# # # Update password for JNDI connections # ############################################################################# print("*** Trying to Connect.... *****") connect('weblogic','welcome1','t3://localhost:7001') print("*** Connected *****") edit() startEdit() print ("*** Encrypt the password ***") en = encrypt('hr') print "Encrypted pwd: ", en print ("*** Changing pwd for LocalDB ***") dsName = 'LocalDB' print 'Changing Password for DataSource ', dsName cd('/JDBCSystemResources/'+dsName+'/JDBCResource/'+dsName+'/JDBCDriverParams/'+dsName) set('PasswordEncrypted',en) save() activate() Its pretty simple and you can expand on it to loop through the data sources and change each as needed. I have hardcoded the password into the file but you can pass it as a parameter as needed using the properties file method. Im not going to get into the detail of that here but its covered with an example here. Couple of points to note: 1. The change to the password requires a server bounce to get the changes picked up. You can add that to the shell script you will use to call the script above. 2. The script above needs to be run from the MW_HOME\user_projects\domains\bifoundation_domain directory to get the encryption libraries set correctly. My command to run the whole script was: d:\oracle\bi_mw\wlserver_10.3\common\bin\wlst.cmd updatepwd.py - where wlst.cmd is the scripting command line and updatepwd.py was my update password script above. I have not quite spoon fed everything you need to make it a robust script but at least you know you can do it and you can work out the rest I think :0)

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions; below are highlights of the most informative.

    DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out? You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    How does this benefit companies having their own data center? It allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility, and reduced risk. The benefits to the (internal) consumers of the services are much faster provisioning and quicker response to changes in business requirements.

    From a deployment perspective, is DBaaS solely the DBA's job? The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self service. Most customers we have worked with define a standardized service catalog with a few (2 to 5) different classes of service. For each of these classes there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Looking at the DBaaS service definition, it seems no different from a service definition provided by a well-run DBA team. Why do you attribute it to DBaaS? There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    How does Database as a Service impact or help with "Click to Compute" or deploying a database in cloud infrastructure? DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in an infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using an infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    How exactly is DBaaS different from a traditional database, where storage, OS, and database together 'transparently' provide service to applications? Will there be cross-database access by applications or users? Some key differences are: 1) the services run on a shared platform; 2) the services can be rapidly provisioned (in under 15 minutes); 3) the services are dynamic and can be relocated, grown, or shrunk as needed to meet business needs, rapidly and without disruption; 4) the user is able to provision the services directly from a standardized service catalog.

    With 24x7x365 databases it is difficult to find off-peak hours for basic admin tasks such as gathering stats, running backups, and batch jobs. How does a pluggable database handle this, along with the differing needs and patching downtime of the applications the databases might be serving? You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching and upgrading, Oracle Multitenant makes patch/upgrade very efficient: you can pre-provision a new-version or patched multitenant database in a different ORACLE_HOME, then unplug a PDB from its CDB and plug it into the newer or patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
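    The unplug/plug patching flow described in that last answer can be scripted. Below is a minimal sketch in Java, assuming a hypothetical PDB named "sales", placeholder hosts and credentials for a suitably privileged account, and the Oracle JDBC driver on the classpath; the DDL statements are standard Oracle Multitenant syntax, but everything else here is illustrative rather than taken from the forum.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Sketch of moving a PDB from an old CDB to a newer/patched CDB.
    // Hosts, service names, user, password, and file paths are hypothetical placeholders.
    public class PdbRelocate {
        public static void main(String[] args) throws Exception {
            // 1. Unplug the PDB from the old (unpatched) CDB.
            try (Connection oldCdb = DriverManager.getConnection(
                    "jdbc:oracle:thin:@oldhost:1521/CDBOLD", "dbaas_admin", "secret");
                 Statement stmt = oldCdb.createStatement()) {
                stmt.execute("ALTER PLUGGABLE DATABASE sales CLOSE IMMEDIATE");
                stmt.execute("ALTER PLUGGABLE DATABASE sales UNPLUG INTO '/tmp/sales.xml'");
                stmt.execute("DROP PLUGGABLE DATABASE sales KEEP DATAFILES");
            }

            // 2. Plug it into the new (patched) CDB and open it.
            try (Connection newCdb = DriverManager.getConnection(
                    "jdbc:oracle:thin:@newhost:1521/CDBNEW", "dbaas_admin", "secret");
                 Statement stmt = newCdb.createStatement()) {
                stmt.execute("CREATE PLUGGABLE DATABASE sales USING '/tmp/sales.xml' NOCOPY");
                stmt.execute("ALTER PLUGGABLE DATABASE sales OPEN");
            }
        }
    }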

    Read the article

  • Data Source Security Part 5

    - by Steve Felts
    If you read through the first four parts of this series on data source security, you should be an expert on this focus area.  There is one more small topic to cover, related to WebLogic resource permissions.  After that comes the test, I mean example, to see with a real set of configuration parameters what the results are with some concrete values.

    WebLogic Resource Permissions

    All of the discussion so far has been about database credentials that are (eventually) used on the database side.  WLS has resource permissions to control which WLS users are allowed to access JDBC resources.  These can be defined on the Policies tab of the Security tab associated with the data source.  There are four permissions: "reserve" (get a new connection), "admin", "shrink", and "reset" (plus the all-inclusive "ALL"); we will focus on "reserve" here because we are talking about getting connections.  By default, JDBC resource permissions are completely open – anyone can do anything.  As soon as you add one policy for a permission, all other users are restricted.  For example, if I add a policy so that "weblogic" can reserve a connection, then all other users will fail to reserve connections unless they are also explicitly added.  The validation is done for WLS user credentials only, not database user credentials.  Configuration of resources in general is described at "Create policies for resource instances" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/security/CreatePoliciesForResourceInstances.html.  This feature can be very useful to restrict what code and users can get to your database.

    There are three use cases:
    - getConnection() with use-database-credentials either true or false: the current WLS user is checked for the permission.
    - getConnection(user, password) with use-database-credentials false: the user/password from the API are checked for the permission.
    - getConnection(user, password) with use-database-credentials true: the current WLS user is checked for the permission.

    If a simple getConnection() is used or database credentials are enabled, the current user that is authenticated to the WLS system is checked. If database credentials are not enabled, then the user and password on the API are used.

    Example

    The following is an actual example of the interactions between identity-based-connection-pooling-enabled, oracle-proxy-session, and use-database-credentials.

    On the database side, the following objects are configured.
    - Database users scott, jdbcqa, and jdbcqa3
    - Permission for proxy: alter user jdbcqa3 grant connect through jdbcqa;
    - Permission for proxy: alter user jdbcqa grant connect through jdbcqa;

    The following WebLogic data source objects are configured.
    - Users weblogic and wluser
    - Credential mapping "weblogic" to "scott"
    - Credential mapping "wluser" to "jdbcqa3"
    - Data source descriptor configured with user "jdbcqa"
    - All tests are run with Set Client ID set to true (more about that below).
    - All tests are run with oracle-proxy-session set to false (more about that below).
    The test program:
    - Runs in a servlet
    - Authenticates to WLS as user "weblogic"

    Results, by combination of use-database-credentials / identity-based-connection-pooling-enabled:

    use-database-credentials=true, identity-based=true
    - getConnection(scott,***): Identity scott, Client weblogic, Proxy null
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy null
    - getConnection(): Default user jdbcqa, Client weblogic, Proxy null

    use-database-credentials=false, identity-based=true
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): User scott, Client scott, Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): User scott, Client scott, Proxy null

    use-database-credentials=true, identity-based=false
    - getConnection(scott,***): Proxy for scott fails
    - getConnection(weblogic,***): weblogic fails - not a db user
    - getConnection(jdbcqa3,***): User jdbcqa3, Client weblogic, Proxy jdbcqa
    - getConnection(): Default user jdbcqa, Client weblogic, Proxy null

    use-database-credentials=false, identity-based=false
    - getConnection(scott,***): scott fails - not a WLS user
    - getConnection(weblogic,***): Default user jdbcqa, Client scott, Proxy null
    - getConnection(jdbcqa3,***): jdbcqa3 fails - not a WLS user
    - getConnection(): Default user jdbcqa, Client scott, Proxy null

    If Set Client ID is set to false, all cases would have Client set to null. If this were not an Oracle thin driver, the one case with the non-null Proxy in the results above would throw an exception, because a proxy session is only supported, implicitly or explicitly, with the Oracle thin driver.

    When oracle-proxy-session is set to true, the only cases that will pass (with a proxy of "jdbcqa") are the following:
    1. Setting use-database-credentials to true and doing getConnection(jdbcqa3,…) or getConnection().
    2. Setting use-database-credentials to false and doing getConnection(wluser, …) or getConnection().

    Summary

    There are many options to choose from for data source security.  Considerations include the number and volatility of WLS and database users, the granularity of data access, the depth of the security identity (a property on the connection or a real user), performance, coordination of the various components in the software stack, and driver capabilities.  Now that you have the big picture (remember that table in part 1), you can make a more informed choice.
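    Since everything above hinges on which getConnection() variant the application calls, here is a minimal sketch of the two call styles from a servlet or other server-side class. It uses the standard JNDI and javax.sql.DataSource APIs; the JNDI name "jdbc/myDS" and the password are hypothetical placeholders, not values from the example configuration above.

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    // Two ways to reserve a connection from a WebLogic data source.
    public class DataSourceLookup {

        // No credentials on the API: the current WLS user is checked against the
        // "reserve" policy, and credential mapping / use-database-credentials
        // determine which database user the connection is created for.
        public Connection reserveAsCurrentWlsUser() throws Exception {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/myDS");
            return ds.getConnection();
        }

        // Credentials on the API: with use-database-credentials=true they are used
        // directly as database credentials (permission checking still uses the
        // current WLS user); with false they are treated as a WLS user/password,
        // checked for the "reserve" permission, and run through the credential mapper.
        public Connection reserveWithExplicitCredentials() throws Exception {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/myDS");
            return ds.getConnection("wluser", "password");
        }
    }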

    Read the article

  • Local LINQtoSQL Database For Your Windows Phone 7 Application

    - by Tim Murphy
    There aren’t many applications that are of value without having some form of data store.  In Windows Phone development we have a few options.  You can store text directly to isolated storage.  You can also use a number of third-party libraries to create or mimic databases in isolated storage.  With Mango we gained the ability to have a native .NET database approach which uses LINQ to SQL.  In this article I will try to bring together the components needed to implement this last type of data store and fill in some of the blanks that I think other articles have left out.

    Defining A Database

    The first thing you are going to need to do is define classes that represent your tables and a data context class that is used as the overall database definition.  The table class consists of column definitions as you would expect.  They can have relationships and constraints as with any relational DBMS.  Below is an example of a table definition.

    First you will need to add some assembly references to the code file.

    using System.ComponentModel;
    using System.Data.Linq;
    using System.Data.Linq.Mapping;

    You can then add the table class and its associated columns.  It needs to implement INotifyPropertyChanged and INotifyPropertyChanging.  Each level of the class needs to be decorated with the attribute appropriate for that part of the definition.  Where the class represents the table, the properties represent the columns.  In this example you will see that the column is marked as a primary key and not nullable, with an auto-generated value. You will also notice that in the column property's set method it uses the NotifyPropertyChanging and NotifyPropertyChanged methods in order to make sure that the proper events are fired.

    [Table]
    public class MyTable : INotifyPropertyChanged, INotifyPropertyChanging
    {
        public event PropertyChangedEventHandler PropertyChanged;
        private void NotifyPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        public event PropertyChangingEventHandler PropertyChanging;
        private void NotifyPropertyChanging(string propertyName)
        {
            if (PropertyChanging != null)
            {
                PropertyChanging(this, new PropertyChangingEventArgs(propertyName));
            }
        }

        private int _TableKey;
        [Column(IsPrimaryKey = true, IsDbGenerated = true, DbType = "INT NOT NULL Identity", CanBeNull = false, AutoSync = AutoSync.OnInsert)]
        public int TableKey
        {
            get { return _TableKey; }
            set
            {
                NotifyPropertyChanging("TableKey");
                _TableKey = value;
                NotifyPropertyChanged("TableKey");
            }
        }
    }

    The last part of the database definition that needs to be created is the data context.  This is a simple class that takes an isolated storage connection string in its constructor and then instantiates tables as public properties.

    public class MyDataContext : DataContext
    {
        public MyDataContext(string connectionString) : base(connectionString)
        {
            MyRecords = this.GetTable<MyTable>();
        }

        public Table<MyTable> MyRecords;
    }

    Creating A New Database Instance

    Now that we have a database definition it is time to create an instance of the data context within our Windows Phone app.  When your app fires up it should check whether the database already exists and create it if it does not.  I would suggest that this be part of the constructor of your ViewModel.

    db = new MyDataContext(connectionString);
    if (!db.DatabaseExists())
    {
        db.CreateDatabase();
    }

    The next thing you have to know is how the connection string for isolated storage should be constructed.  The main sticking point I have found is that the database cannot be created unless the file mode is read/write.  You may have different connection strings, but the initial one needs to be similar to the following.

    string connString = "Data Source = 'isostore:/MyApp.sdf'; File Mode = read write";

    Using Your Database

    Now that you have done all the up-front work it is time to put the database to use.  To make your life a little easier and keep proper separation between your view and your viewmodel, you should add a couple of methods to the viewmodel.  These will do the CRUD work of your application.  What you will notice is that the SubmitChanges method is the secret sauce in all of the methods that change data.

    private MyDataContext myDb;
    private ObservableCollection<MyTable> _viewRecords;

    public ObservableCollection<MyTable> ViewRecords
    {
        get { return _viewRecords; }
        set
        {
            _viewRecords = value;
            NotifyPropertyChanged("ViewRecords");
        }
    }

    public void LoadMyTableData()
    {
        var tempItems = from MyTable myRecord in myDb.MyRecords select myRecord;
        ViewRecords = new ObservableCollection<MyTable>(tempItems);
    }

    public void SaveChangesToDb()
    {
        myDb.SubmitChanges();
    }

    public void AddMyTableItem(MyTable newItem)
    {
        myDb.MyRecords.InsertOnSubmit(newItem);
        myDb.SubmitChanges();
    }

    public void DeleteMyTableItem(MyTable newItem)
    {
        myDb.MyRecords.DeleteOnSubmit(newItem);
        myDb.SubmitChanges();
    }

    Updating An Existing Database

    What happens when you need to change the structure of your database?  Unfortunately you have to add code to your application that checks the version of the database, which over time will create some pollution in your code base.  On the other hand, it does give you control of the update.  In this example you will see the DatabaseSchemaUpdater in action.  Assuming we added a "Notes" field to the MyTable structure, the following code will check whether the database is the latest version and add the field if it isn't.

    if (!myDb.DatabaseExists())
    {
        myDb.CreateDatabase();
    }
    else
    {
        DatabaseSchemaUpdater dbUpdater = myDb.CreateDatabaseSchemaUpdater();
        if (dbUpdater.DatabaseSchemaVersion < 2)
        {
            dbUpdater.AddColumn<MyTable>("Notes");
            dbUpdater.DatabaseSchemaVersion = 2;
            dbUpdater.Execute();
        }
    }

    Summary

    This approach does take a fairly large amount of work, but I think the end product is robust and very native for .NET developers.  It turns out to be worth the investment.

    del.icio.us Tags: Windows Phone, Windows Phone 7, LINQ to SQL, LINQ, Database, Isolated Storage

    Read the article

  • SPARC T4 ??????: SPARC T4 ??????????!!

    - by user13138700
    The SPARC T4 CPU was announced in September 2011, and systems based on SPARC T4 began shipping in October 2011. This post looks at the SPARC T4 processor itself and what makes it different from earlier T-series chips.

    SPARC T4 and Oracle Solaris will also be featured at Oracle OpenWorld Tokyo 2012, held over the three days of April 4-6, with SPARC & Solaris roadmap and deep-dive sessions presented by the Oracle Solaris evangelist team.
    Oracle OpenWorld Tokyo 2012: http://www.oracle.com/openworld/jp-ja/index.html (invitation code 7264)

    The biggest change in SPARC T4 is the core itself. The earlier SPARC T1/T2/T3 processors used the throughput-oriented S2 core, which excelled at running many threads in parallel but offered modest single-thread performance. SPARC T4 introduces the new S3 core, which greatly improves single-thread performance while retaining the throughput character of the T series, so it is suited not only to web front ends but also to database and batch workloads.

    < T4 highlights >
    - New SPARC core (S3): roughly 5x the single-thread performance of the previous core
    - Integrated Crypto (encryption) acceleration
    - Available in 1-, 2-, and 4-socket systems

    < T4 processor specifications >
    8x SPARC S3 cores (64 threads per processor)
    4MB shared L3 cache (8 banks, 16-way)
    8x9 crossbar switch
    4x DDR3 memory controllers @ 6.4 Gbps
    6x coherence links @ 9.6 Gbps
    2x8 PCIe 2.0 (5 GT/s)
    2x 10Gb XAUI Ethernet

    < S3 core functional units >
    ALU : Arithmetic Logic Unit
    BRU : Branch Logic Unit
    FGU : Floating-point Graphics Unit
    IRF : Integer Register File
    FRF : Floating-point Register File
    WRF : Working Register File
    MMU : Memory Management Unit
    LSU : Load Store Unit
    Crypto (SPU) : Streaming Processing Unit
    TRU : Trap Logic Unit

    < S3 core pipeline >
    8 threads per core
    Out-of-order execution
    16-stage integer pipeline
    64-entry ITLB and 128-entry DTLB
    64KB 4-way L1 caches
    128KB 8-way L2 cache

    < T4 core vs. T3 core >
    The T4 core executes instructions out of order: the front-end stages up to Pick run in order, the stages from Pick through Commit run out of order, and the stages from Commit onward run in order again, whereas the T3 core was strictly in order.

    < T4 dynamic threading >
    The core dynamically balances single-thread and multi-thread execution: when only some threads are active, the active threads automatically receive more of the core's resources, with no special mode switch required.

    < T4 vs. T1/T2/T3 positioning >
    SPARC T4 covers not only the web-tier workloads that T3 and earlier handled well, but also database and batch workloads that demand strong single-thread performance.

    At Oracle OpenWorld Tokyo 2012 (April 4, 5, and 6) you can see SPARC T4 servers and Solaris demonstrations in person and talk with the engineers at the exhibit area.
    Exhibit information: http://www.oracle.com/openworld/jp-ja/exhibit/index.html

    Read the article

  • hal.dll errors

    - by Robert Elliott
    I am getting frequent BSoDs, mostly with hall.dll errors. I have Dell Inspiron laptop running Windows 7 SP1. The following file, werfault, is shown below. Can anyone help me work out what is wrong? Version=1 EventType=BlueScreen EventTime=129987824768810026 ReportType=4 Consent=1 ReportIdentifier=1c3e1c58-3b30-11e2-9074-002219f61870 IntegratorReportIdentifier=113012-32557-01 Response.type=4 DynamicSig[1].Name=OS Version DynamicSig[1].Value=6.1.7601.2.1.0.768.3 DynamicSig[2].Name=Locale ID DynamicSig[2].Value=2057 UI[2]=C:\Windows\system32\wer.dll UI[3]=Windows has recovered from an unexpected shutdown UI[4]=Windows can check online for a solution to the problem. UI[5]=&Check for solution UI[6]=&Check later UI[7]=Cancel UI[8]=Windows has recovered from an unexpected shutdown UI[9]=A problem caused Windows to stop working correctly. Windows will notify you if a solution is available. UI[10]=Close Sec[0].Key=BCCode Sec[0].Value=a Sec[1].Key=BCP1 Sec[1].Value=0000000000000000 Sec[2].Key=BCP2 Sec[2].Value=0000000000000002 Sec[3].Key=BCP3 Sec[3].Value=0000000000000000 Sec[4].Key=BCP4 Sec[4].Value=FFFFF80002C0E477 Sec[5].Key=OS Version Sec[5].Value=6_1_7601 Sec[6].Key=Service Pack Sec[6].Value=1_0 Sec[7].Key=Product Sec[7].Value=768_1 File[0].CabName=113012-32557-01.dmp File[0].Path=113012-32557-01.dmp File[0].Flags=589826 File[0].Type=2 File[0].Original.Path=C:\Windows\Minidump\113012-32557-01.dmp File[1].CabName=sysdata.xml File[1].Path=WER-75941-0.sysdata.xml File[1].Flags=589826 File[1].Type=5 File[1].Original.Path=C:\Users\Robert\AppData\Local\Temp\WER-75941-0.sysdata.xml File[2].CabName=Report.cab File[2].Path=Report.cab File[2].Flags=196608 File[2].Type=7 File[2].Original.Path=Report.cab FriendlyEventName=Shut down unexpectedly ConsentKey=BlueScreen AppName=Windows AppPath=C:\Windows\System32\WerFault.exe *********From the minidump file**** RAX = fffff88002f22150 RBX = fffffa80074141f0 RCX = 000000000000000a RDX = 0000000000000000 RSI = fffffa8007278180 RDI = 0000000000000001 R9 = 0000000000000000 R10 = fffff80002c0e477 R11 = 0000000000000000 R12 = fffffa800523e7a0 R13 = 0000000000001000 R14 = 0000000000000028 R15 = fffffa80074141f0 RBP = fffff88002f22210 RIP = fffff80002cd3fc0 RSP = fffff88002f22048 SS = 0000 GS = 002b FS = 0053 ES = 002b DS = 002b CS = 0010 Flags = 00200286 fffff800`02e99ac0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e99ad0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e99ae0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e99af0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e99b00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e99b10 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e99b20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e99b30 00 00 00 00 00 00 00 00 00 00 00 00 04 00 00 00 ................ fffff800`02e99b40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e99b50 00 00 00 00 00 00 00 00 00 00 00 00 ............ fffff800`02e81928 00 00 00 00 .... fffff800`02e81924 00 00 00 00 .... fffff800`02e0a880 37 36 30 31 2E 31 37 39 34 34 2E 61 6D 64 36 34 7601.17944.amd64 fffff800`02e0a890 66 72 65 2E 77 69 6E 37 73 70 31 5F 67 64 72 2E fre.win7sp1_gdr. fffff800`02e0a8a0 31 32 30 38 33 30 2D 30 33 33 33 00 00 00 00 00 120830-0333..... fffff800`02e0a8b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 
fffff800`02e0a8c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a8d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a8e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a8f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a900 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a910 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a920 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a930 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a940 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a950 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ fffff800`02e0a960 35 36 65 38 62 61 31 33 2D 37 30 32 39 2D 34 37 56e8ba13-7029-47 fffff800`02e0a970 32 38 2D 61 35 30 36 2D 32 64 64 62 34 61 30 63 28-a506-2ddb4a0c fffff800`02c0e000 C5 0F 85 79 02 00 00 8B 9C 24 90 00 00 00 E9 A5 ...y.....$...... fffff800`02c0e010 00 00 00 44 2B C3 45 33 C9 E8 5E 14 00 00 49 3B ...D+.E3..^...I; fffff800`02c0e020 C5 74 2B 44 8B 8C 24 90 00 00 00 48 8B C8 41 8D .t+D..$....H..A. fffff800`02c0e030 51 FF 41 3B D5 76 0D 44 8B C2 49 83 E8 01 48 8B Q.A;.v.D..I...H. fffff800`02c0e040 49 08 75 F6 48 89 79 08 41 03 D9 48 8B F8 3B DD I.u.H.y.A..H..;. fffff800`02c0e050 75 08 48 8B C7 E9 26 02 00 00 48 8B 96 98 00 00 u.H...&...H..... fffff800`02c0e060 00 48 8D 84 24 90 00 00 00 44 8B C5 48 89 44 24 .H..$....D..H.D$ fffff800`02c0e070 28 44 2B C3 45 33 C9 48 8B CE 44 88 6C 24 20 E8 (D+.E3.H..D.l$ . fffff800`02c0e080 CC 14 00 00 49 3B C5 74 2B 44 8B 8C 24 90 00 00 ....I;.t+D..$... fffff800`02c0e090 00 48 8B C8 41 8D 51 FF 41 3B D5 76 0D 44 8B C2 .H..A.Q.A;.v.D.. fffff800`02c0e0a0 49 83 E8 01 48 8B 49 08 75 F6 48 89 79 08 41 03 I...H.I.u.H.y.A. fffff800`02c0e0b0 D9 48 8B F8 3B DD 74 9A 44 38 AE 28 01 00 00 0F .H..;.t.D8.(.... fffff800`02c0e0c0 85 DF 00 00 00 48 8D 44 24 30 4C 8D 8C 24 A0 00 .....H.D$0L..$.. fffff800`02c0e0d0 00 00 4C 8D 84 24 A8 00 00 00 8B D5 48 8B CE 48 ..L..$......H..H fffff800`02c0e0e0 89 44 24 20 E8 F7 1F 00 00 8B F8 89 84 24 90 00 .D$ .........$.. fffff800`02c0e0f0 00 00 41 3B C5 0F 84 83 01 00 00 4C 8B A4 24 A8 ..A;.......L..$. fffff800`02c0e100 00 00 00 44 8B 84 24 A0 00 00 00 48 8B 8E 98 00 ...D..$....H.... fffff800`02c0e110 00 00 49 8B D4 44 8B C8 E8 DB 1B 00 00 49 3B C5 ..I..D.......I;. fffff800`02c0e120 74 35 48 8B 96 98 00 00 00 48 8D 84 24 90 00 00 t5H......H..$... fffff800`02c0e130 00 41 B1 01 48 89 44 24 28 44 8B C5 48 8B CE 44 .A..H.D$(D..H..D fffff800`02c0e140 88 6C 24 20 E8 43 12 00 00 49 3B C5 0F 84 2C 01 .l$ .C...I;...,. fffff800`02c0e150 00 00 E9 29 01 00 00 48 8B 5C 24 30 49 3B DD 74 ...)...H.\$0I;.t fffff800`02c0e160 2A 4D 3B E5 74 0C 48 8B D3 49 8B CC FF 15 AE CE *M;.t.H..I...... fffff800`02c0e170 01 00 48 8B CB FF 15 95 CF 01 00 33 D2 48 8B CB ..H........3.H.. fffff800`02c0e180 FF 15 AA CE 01 00 E9 F3 00 00 00 C1 E7 0C 41 B8 ..............A. fffff800`02c0e190 01 00 00 00 49 8B CC 8B D7 FF 15 99 CE 01 00 E9 ....I........... fffff800`02c0e1a0 DA 00 00 00 2B EB 33 C9 41 B8 48 61 6C 20 8B D5 ....+.3.A.Hal .. fffff800`02c0e1b0 44 8B FD 48 C1 E2 03 FF 15 33 D4 01 00 4C 8B F0 D..H.....3...L.. fffff800`02c0e1c0 49 3B C5 0F 84 8F 00 00 00 45 8B E5 41 3B ED 76 I;.......E..A;.v fffff800`02c0e1d0 3F 4C 8B E8 BA 00 10 00 00 B9 04 00 00 00 41 B8 ?L............A. 
fffff800`02c0e1e0 48 61 6C 20 FF 15 06 D4 01 00 49 89 45 00 48 85 Hal ......I.E.H. fffff800`02c0e1f0 C0 74 39 48 8B C8 FF 15 BC CE 01 00 48 C1 E8 20 .t9H........H.. fffff800`02c0e200 85 C0 75 28 41 FF C4 49 83 C5 08 44 3B E5 72 C4 ..u(A..I...D;.r. fffff800`02c0e210 48 8B 8E 98 00 00 00 44 8B C5 BA 01 00 00 00 E8 H......D........ fffff800`02c0e220 58 19 00 00 4C 8B E8 48 85 C0 75 6C 45 33 ED 45 X...L..H..ulE3.E fffff800`02c0e230 3B E5 76 19 49 8B EE 48 8B 4D 00 33 D2 FF 15 ED ;.v.I..H.M.3.... fffff800`02c0e240 CD 01 00 48 83 C5 08 49 83 EC 01 75 EA 33 D2 49 ...H...I...u.3.I fffff800`02c0e250 8B CE FF 15 D8 CD 01 00 41 3B DD 76 21 8B EB 48 ........A;.v!..H fffff800`02c0e260 8B 96 98 00 00 00 48 8B 5F 08 4C 8B C7 48 8B CE ......H._.L..H.. fffff800`02c0e270 E8 2B 15 00 00 48 83 ED 01 48 8B FB 75 E1 33 C0 .+...H...H..u.3. fffff800`02c0e280 48 8B 9C 24 98 00 00 00 48 83 C4 50 41 5F 41 5E H..$....H..PA_A^ fffff800`02c0e290 41 5D 41 5C 5F 5E 5D C3 8D 4D FF 85 C9 74 0C 8B A]A\_^]..M...t.. fffff800`02c0e2a0 D1 48 83 EA 01 48 8B 40 08 75 F6 48 89 78 08 49 [email protected] fffff800`02c0e2b0 8B FD 85 ED 74 29 49 8B DE 48 8B 0B FF 15 F6 CD ....t)I..H...... fffff800`02c0e2c0 01 00 41 89 45 00 48 8B 03 48 83 C3 08 48 83 C8 ..A.E.H..H...H.. fffff800`02c0e2d0 0F 49 83 EF 01 49 89 45 10 4D 8B 6D 08 75 DA 48 .I...I.E.M.m.u.H fffff800`02c0e2e0 8B 8E 98 00 00 00 48 8D 54 24 38 48 83 C1 78 FF ......H.T$8H..x. fffff800`02c0e2f0 15 83 CD 01 00 4C 8B 9E 98 00 00 00 48 8D 4C 24 .....L......H.L$ fffff800`02c0e300 38 41 01 AB D0 00 00 00 FF 15 3A CD 01 00 33 D2 8A........:...3. fffff800`02c0e310 49 8B CE FF 15 17 CD 01 00 E9 34 FD FF FF 90 90 I.........4..... fffff800`02c0e320 90 90 90 90 45 85 C0 74 43 48 89 5C 24 08 48 89 ....E..tCH.\$.H. fffff800`02c0e330 74 24 10 57 48 83 EC 20 48 8B F1 41 8B F8 48 8B t$.WH.. H..A..H. fffff800`02c0e340 5A 08 4C 8B C2 48 8B 96 98 00 00 00 48 8B CE E8 Z.L..H......H... fffff800`02c0e350 4C 14 00 00 48 83 EF 01 48 8B D3 75 E1 48 8B 5C L...H...H..u.H.\ fffff800`02c0e360 24 30 48 8B 74 24 38 48 83 C4 20 5F C3 90 90 90 $0H.t$8H.. _.... fffff800`02c0e370 90 90 90 90 48 8B C4 48 89 58 08 48 89 68 10 48 ....H..H.X.H.h.H fffff800`02c0e380 89 70 18 48 89 78 20 41 54 41 55 4C 8B D9 4D 8B .p.H.x ATAUL..M. fffff800`02c0e390 E0 48 8B F2 B9 FF 0F 00 00 4D 85 DB 75 08 4C 8B .H.......M..u.L. fffff800`02c0e3a0 D1 40 32 FF EB 12 4D 8B 93 88 00 00 00 41 8A BB [email protected].. fffff800`02c0e3b0 91 00 00 00 49 C1 EA 0C 44 8B 44 24 38 41 8B C1 ....I...D.D$8A.. fffff800`02c0e3c0 4C 2B 4E 20 23 C1 49 C1 E9 0C 41 BD 00 10 00 00 L+N #.I...A..... fffff800`02c0e3d0 41 8B D5 41 8B E9 2B D0 8B CA 4C 39 54 EE 30 76 A..A..+...L9T.0v fffff800`02c0e3e0 04 33 C9 EB 4F 41 3B D0 73 43 4C 8D 4C EE 38 4D .3..OA;.sCL.L.8M fffff800`02c0e3f0 39 11 77 39 49 8B 59 F8 48 8D 43 01 49 3B 01 75 9.w9I.Y.H.C.I;.u fffff800`02c0e400 2C 48 8B C3 49 33 01 48 A9 00 00 F0 FF 75 1E 40 ,H..I3.H.....u.@ fffff800`02c0e410 80 FF 01 74 0C 49 33 19 48 F7 C3 F0 FF FF FF 75 ...t.I3.H......u fffff800`02c0e420 0C 41 03 CD 49 83 C1 08 41 3B C8 72 C2 41 3B C8 .A..I...A;.r.A;. fffff800`02c0e430 41 0F 47 C8 4D 85 DB 0F 84 92 00 00 00 41 80 BB A.G.M........A.. fffff800`02c0e440 28 01 00 00 00 0F 84 84 00 00 00 4C 39 54 EE 30 (..........L9T.0 fffff800`02c0e450 76 7D 8B CA 48 8D 44 EE 38 41 3B D0 73 11 4C 39 v}..H.D.8A;.s.L9 fffff800`02c0e460 10 76 0C 41 03 CD 48 83 C0 08 41 3B C8 72 EF 49 .v.A..H...A;.r.I fffff800`02c0e470 8B 44 24 18 41 3B C8 44 8B 08 4C 8B 50 08 41 0F .D$.A;.D..L.P.A. 
fffff800`02c0e480 47 C8 41 C1 E9 0C EB 3A 45 8B 02 41 8D 41 01 41 G.A....:E..A.A.A fffff800`02c0e490 C1 E8 0C 44 3B C0 75 2E 41 8B C0 41 33 C1 A9 00 ...D;.u.A..A3... fffff800`02c0e4a0 00 F0 FF 75 21 40 80 FF 01 74 0D 41 8B C0 41 33 [email protected] fffff800`02c0e4b0 C1 A9 F0 FF FF FF 75 0E 4D 8B 52 08 45 8B C8 41 ......u.M.R.E..A fffff800`02c0e4c0 03 D5 3B D1 72 C2 3B D1 0F 47 D1 8B C2 EB 02 8B ..;.r.;..G...... fffff800`02c0e4d0 C1 48 8B 5C 24 18 48 8B 6C 24 20 48 8B 74 24 28 .H.\$.H.l$ H.t$( fffff800`02c0e4e0 48 8B 7C 24 30 41 5D 41 5C C3 90 90 90 90 90 90 H.|$0A]A\....... fffff800`02c0e4f0 48 89 5C 24 08 48 89 6C 24 10 48 89 74 24 18 57 H.\$.H.l$.H.t$.W fffff800`02c0e500 41 54 41 55 48 83 EC 30 48 8B 5C 24 70 4D 8B E1 ATAUH..0H.\$pM.. fffff800`02c0e510 49 8B F0 8B 03 4C 8B EA 48 8B E9 89 44 24 20 E8 I....L..H...D$ . fffff800`02c0e520 50 FE FF FF 49 8B CC 89 03 49 2B 4D 20 8B F8 48 P...I....I+M ..H fffff800`02c0e530 C1 E9 0C 8B C9 49 8B 54 CD 30 49 8B CC 48 C1 E2 .....I.T.0I..H.. fffff800`02c0e540 0C 81 E1 FF 0F 00 00 48 03 D1 48 85 F6 74 72 48 .......H..H..trH fffff800`02c0e550 39 95 88 00 00 00 73 69 4C 8B 4E 18 48 8B 84 24 9.....siL.N.H..$ fffff800`02c0e560 80 00 00 00 41 8B DC 41 8B 09 81 E3 FF 0F 00 00 ....A..A........ fffff800`02c0e570 03 CB 80 7C 24 78 01 48 89 08 75 17 4D 8B C4 49 ...|$x.H..u.M..I fffff800`02c0e580 8B D5 48 8B CD C6 44 24 28 01 89 7C 24 20 E8 C5 ..H...D$(..|$ .. fffff800`02c0e590 06 00 00 8B C7 C1 EF 0C 25 FF 0F 00 00 8D 8C 18 ........%....... fffff800`02c0e5a0 FF 0F 00 00 48 8B 46 18 C1 E9 0C 03 CF 74 0C 8B ....H.F......t.. fffff800`02c0e5b0 D1 48 83 EA 01 48 8B 40 08 75 F6 48 89 46 18 EB [email protected].. fffff800`02c0e5c0 0B 48 8B 84 24 80 00 00 00 48 89 10 48 8B 5C 24 .H..$....H..H.\$ fffff800`02c0e5d0 50 48 8B 6C 24 58 48 8B 74 24 60 48 83 C4 30 41 PH.l$XH.t$`H..0A fffff800`02c0e5e0 5D 41 5C 5F C3 90 90 90 90 90 90 90 4D 85 C0 0F ]A\_........M... fffff800`02c0e5f0 84 09 01 00 00 48 8B C4 48 89 58 08 48 89 68 10 .....H..H.X.H.h. fffff800`02c0e600 48 89 70 18 48 89 78 20 41 54 41 55 41 56 48 83 H.p.H.x ATAUAVH. fffff800`02c0e610 EC 30 44 8A 64 24 78 49 8B D8 49 8B F1 4C 8B EA .0D.d$xI..I..L.. fffff800`02c0e620 4C 8B F1 49 89 58 18 41 80 FC 01 0F 84 AF 00 00 L..I.X.A........ fffff800`02c0e630 00 8B 7C 24 70 85 FF 0F 84 9F 00 00 00 4C 8B CE ..|$p........L.. fffff800`02c0e640 4C 8B C3 49 8B D5 49 8B CE 89 7C 24 20 E8 22 FD L..I..I...|$ .". fffff800`02c0e650 FF FF 48 8B CE 49 2B 4D 20 8B E8 48 C1 E9 0C 8B ..H..I+M ..H.... fffff800`02c0e660 C9 49 8B 54 CD 30 48 8B CE 48 C1 E2 0C 81 E1 FF .I.T.0H..H...... fffff800`02c0e670 0F 00 00 48 03 D1 49 39 96 88 00 00 00 73 52 4C ...H..I9.....sRL fffff800`02c0e680 8B 4B 18 4C 8B C6 49 8B D5 49 8B CE 44 88 64 24 .K.L..I..I..D.d$ fffff800`02c0e690 28 89 6C 24 20 E8 BE 05 00 00 8B C5 44 8B DE 25 (.l$ .......D..% fffff800`02c0e6a0 FF 0F 00 00 41 81 E3 FF 0F 00 00 41 8D 8C 03 FF ....A......A.... fffff800`02c0e6b0 0F 00 00 8B C5 C1 E8 0C C1 E9 0C 03 C8 48 8B 43 .............H.C fffff800`02c0e6c0 18 74 0A 48 83 E9 01 48 8B 40 08 75 F6 48 89 43 [email protected] fffff800`02c0e6d0 18 48 03 F5 2B FD 0F 85 61 FF FF FF 48 89 5B 18 .H..+...a...H.[. fffff800`02c0e6e0 48 8B 5C 24 50 48 8B 6C 24 58 48 8B 74 24 60 48 H.\$PH.l$XH.t$`H fffff800`02c0e6f0 8B 7C 24 68 48 83 C4 30 41 5E 41 5D 41 5C C3 90 .|$hH..0A^A]A\.. fffff800`02c0e700 90 90 90 90 90 90 90 90 48 89 54 24 10 53 55 56 ........H.T$.SUV fffff800`02c0e710 57 41 54 41 55 41 56 41 57 48 83 EC 58 48 8B F2 WATAUAVAWH..XH.. 
fffff800`02c0e720 48 8B D9 48 8D 54 24 30 48 8D 0D B9 67 02 00 45 H..H.T$0H...g..E fffff800`02c0e730 8B E1 49 8B F8 4C 89 84 24 B0 00 00 00 FF 15 35 ..I..L..$......5 fffff800`02c0e740 C9 01 00 4C 8B 2D 86 67 02 00 4C 8B 35 77 67 02 ...L.-.g..L.5wg. fffff800`02c0e750 00 48 8B C6 44 8B C6 48 2B 43 20 41 81 E0 FF 0F .H..D..H+C A.... fffff800`02c0e760 00 00 BD 00 10 00 00 48 C1 E8 0C 45 89 45 2C 8B .......H...E.E,. fffff800`02c0e770 CD 8B C0 41 2B C8 41 89 4D 28 4C 8D 4C C3 30 48 ...A+.A.M(L.L.0H fffff800`02c0e780 8B C6 48 25 00 F0 FF FF 49 89 45 20 49 89 46 20 ..H%....I.E I.F fffff800`02c0e790 45 89 46 2C 41 89 4E 28 44 89 84 24 B8 00 00 00 E.F,A.N(D..$.... fffff800`02c0e7a0 4C 89 8C 24 A0 00 00 00 45 85 E4 0F 84 90 01 00 L..$....E....... fffff800`02c0e7b0 00 48 8B 5F 10 48 81 E3 00 F0 FF FF 75 3C 8B 07 .H._.H......u<.. fffff800`02c0e7c0 48 8B 0D 49 67 02 00 44 8D 4B 01 48 C1 E8 0C 4D H..Ig..D.K.H...M fffff800`02c0e7d0 8B C6 BA 48 61 6C 20 49 89 46 30 FF 15 DF C8 01 ...Hal I.F0..... fffff800`02c0e7e0 00 48 8B D8 48 85 C0 0F 84 36 01 00 00 4C 8B 8C .H..H....6...L.. fffff800`02c0e7f0 24 A0 00 00 00 41 B7 01 EB 09 41 8B C0 48 03 D8 $....A....A..H.. fffff800`02c0e800 45 32 FF 49 8B 01 33 FF 49 89 45 30 48 8B 0D C5 E2.I..3.I.E0H... fffff800`02c0e810 66 02 00 44 8B CF 4D 8B C5 BA 48 61 6C 20 FF 15 f..D..M...Hal .. fffff800`02c0e820 9C C8 01 00 48 8B F0 48 85 C0 75 24 FF C7 83 FF ....H..H..u$.... fffff800`02c0e830 06 7C D9 48 21 44 24 20 45 33 C9 41 B8 01 EF 00 .|.H!D$ E3.A.... fffff800`02c0e840 00 48 8B D5 B9 AC 00 00 00 FF 15 A1 CA 01 00 CC .H.............. fffff800`02c0e850 8B FD 2B BC 24 B8 00 00 00 44 3B E7 41 0F 42 FC ..+.$....D;.A.B. fffff800`02c0e860 80 BC 24 C0 00 00 00 01 8B EF 44 8B C7 75 0E 48 ..$.......D..u.H fffff800`02c0e870 8B D0 48 8B CB FF 15 AD 33 02 00 EB 0B 48 8B D3 ..H.....3....H.. fffff800`02c0e880 48 8B C8 E8 C8 A6 01 00 4D 8B C5 BA 48 61 6C 20 H.......M...Hal fffff800`02c0e890 48 8B CE FF 15 47 C8 01 00 41 80 FF 01 75 11 4D H....G...A...u.M fffff800`02c0e8a0 8B C6 BA 48 61 6C 20 48 8B CB FF 15 30 C8 01 00 ...Hal H....0... fffff800`02c0e8b0 48 8B 84 24 A8 00 00 00 4C 8B 8C 24 A0 00 00 00 H..$....L..$.... fffff800`02c0e8c0 44 2B E7 48 8B BC 24 B0 00 00 00 48 03 C5 BD 00 D+.H..$....H.... fffff800`02c0e8d0 10 00 00 48 8B 7F 08 49 83 C1 08 45 33 C0 44 3B ...H..I...E3.D; fffff800`02c0e8e0 E5 48 8B C8 41 8B D4 0F 47 D5 48 81 E1 00 F0 FF .H..A...G.H..... fffff800`02c0e8f0 FF 48 89 84 24 A8 00 00 00 49 89 4D 20 41 89 55 .H..$....I.M A.U fffff800`02c0e900 28 25 FF 0F 00 00 41 89 45 2C 49 89 4E 20 41 89 (%....A.E,I.N A. fffff800`02c0e910 46 2C 41 89 56 28 48 89 BC 24 B0 00 00 00 E9 75 F,A.V(H..$.....u fffff800`02c0e920 FE FF FF 48 83 64 24 20 00 45 33 C9 41 B8 00 EF ...H.d$ .E3.A... fffff800`02c0e930 00 00 48 8B D5 B9 AC 00 00 00 FF 15 B0 C9 01 00 ..H............. fffff800`02c0e940 CC 48 8D 4C 24 30 FF 15 FC C6 01 00 48 83 C4 58 .H.L$0......H..X fffff800`02c0e950 41 5F 41 5E 41 5D 41 5C 5F 5E 5D 5B C3 90 90 90 A_A^A]A\_^][.... fffff800`02c0e960 90 90 90 90 48 89 5C 24 08 48 89 6C 24 10 48 89 ....H.\$.H.l$.H. fffff800`02c0e970 74 24 18 57 41 54 41 55 48 83 EC 50 33 C0 49 8B t$.WATAUH..P3.I. fffff800`02c0e980 F9 41 8B F0 4C 8B E2 48 8B CA 49 C7 C3 00 F0 FF .A..L..H..I..... fffff800`02c0e990 FF 45 85 C0 74 10 4C 85 59 10 74 0A 48 8B 49 08 .E..t.L.Y.t.H.I. fffff800`02c0e9a0 FF C0 3B C6 72 F0 3B C6 75 09 49 83 21 00 E9 FB ..;.r.;.u.I.!... fffff800`02c0e9b0 00 00 00 65 48 8B 04 25 20 00 00 00 33 C9 44 8B ...eH..% ...3.D. 
fffff800`02c0e9c0 50 24 48 8B 05 F7 64 02 00 4A 8B 2C D0 4C 8D 4D P$H...d..J.,.L.M fffff800`02c0e9d0 30 45 85 C0 74 22 4C 8B C6 4C 85 5A 10 75 0F 8B 0E..t"L..L.Z.u.. fffff800`02c0e9e0 02 FF C1 48 C1 E8 0C 49 89 01 49 83 C1 08 49 83 ...H...I..I...I. fffff800`02c0e9f0 E8 01 48 8B 52 08 75 E1 33 DB C1 E1 0C 41 B5 01 ..H.R.u.3....A.. fffff800`02c0ea00 48 21 5D 20 21 5D 2C 89 4D 28 44 38 2D 07 65 02 H!] !],.M(D8-.e. fffff800`02c0ea10 00 75 10 48 8B 05 C6 64 02 00 4A 8B 1C D0 E9 29 .u.H...d..J....) fffff800`02c0ea20 01 00 00 48 8D 0D D6 64 02 00 FF 15 50 C6 01 00 ...H...d....P... fffff800`02c0ea30 48 85 C0 0F 85 F9 00 00 00 44 8D 40 01 45 33 C9 [email protected]. fffff800`02c0ea40 33 D2 48 8B CD C7 44 24 28 20 00 00 00 21 5C 24 3.H...D$( ...!\$ fffff800`02c0ea50 20 FF 15 71 C6 01 00 4C 8B D8 48 85 C0 74 69 45 ..q...L..H..tiE fffff800`02c0ea60 32 ED 49 8B D3 85 F6 74 36 48 8B CE 49 F7 44 24 2.I....t6H..I.D$ fffff800`02c0ea70 10 00 F0 FF FF 75 1D 41 8B 44 24 10 25 EF 0F 00 .....u.A.D$.%... fffff800`02c0ea80 00 48 0B C2 48 83 C8 10 48 81 C2 00 10 00 00 49 .H..H...H......I fffff800`02c0ea90 89 44 24 10 48 83 E9 01 4D 8B 64 24 08 75 CD 48 .D$.H...M.d$.u.H fffff800`02c0eaa0 89 2F 4C 89 5F 08 48 89 5F 10 44 88 6F 30 4C 8D ./L._.H._.D.o0L. fffff800`02c0eab0 5C 24 50 49 8B 5B 20 49 8B 6B 28 49 8B 73 30 49 \$PI.[ I.k(I.s0I fffff800`02c0eac0 8B E3 41 5D 41 5C 5F C3 48 8D 54 24 30 48 8D 0D ..A]A\_.H.T$0H.. fffff800`02c0ead0 4C 64 02 00 FF 15 66 C5 01 00 48 8B 15 FF 63 02 Ld....f...H...c. fffff800`02c0eae0 00 44 8B 0D 10 64 02 00 48 8B 02 B9 01 00 00 00 .D...d..H....... fffff800`02c0eaf0 44 8B 40 18 44 3B C9 76 1E 48 83 C2 08 48 8B 02 [email protected];.v.H...H.. fffff800`02c0eb00 44 39 40 18 7D 06 44 8B 40 18 8B D9 FF C1 48 83 D9@.}[email protected]. fffff800`02c0eb10 C2 08 41 3B C9 72 E6 48 8D 4C 24 30 FF 15 0E C6 ..A;.r.H.L$0.... fffff800`02c0eb20 01 00 48 8B 05 B7 63 02 00 44 8B DB 4A 8B 1C D8 ..H...c..D..J... fffff800`02c0eb30 EB 07 83 60 1C 00 48 8B D8 F0 83 43 18 01 48 8D ...`..H....C..H. fffff800`02c0eb40 57 18 48 8D 4B 20 FF 15 F4 C4 01 00 48 8B 4B 10 W.H.K ......H.K. fffff800`02c0eb50 41 B9 01 00 00 00 4C 8B C5 BA 48 61 6C 20 FF 15 A.....L...Hal .. fffff800`02c0eb60 5C C5 01 00 4C 8B D8 48 85 C0 0F 85 F2 FE FF FF \...L..H........ fffff800`02c0eb70 48 21 44 24 20 45 33 C9 BA 00 10 00 00 B9 AC 00 H!D$ E3......... fffff800`02c0eb80 00 00 41 B8 02 EF 00 00 FF 15 62 C7 01 00 CC 90 ..A.......b..... fffff800`02c0eb90 90 90 90 90 90 90 90 90 48 89 5C 24 08 48 89 6C ........H.\$.H.l fffff800`02c0eba0 24 18 48 89 74 24 20 57 48 83 EC 20 41 80 78 30 $.H.t$ WH.. A.x0 fffff800`02c0ebb0 00 49 8B F8 8B F2 48 8B D9 BD 01 00 00 00 75 0F .I....H.......u. fffff800`02c0ebc0 49 8B 10 49 8B 48 08 FF 15 53 C4 01 00 EB 4A 4D I..I.H...S....JM fffff800`02c0ebd0 8B 00 48 8B 4F 08 BA 48 61 6C 20 FF 15 FF C4 01 ..H.O..Hal ..... fffff800`02c0ebe0 00 80 3D 30 63 02 00 00 75 2F 48 8D 4F 18 FF 15 ..=0c...u/H.O... fffff800`02c0ebf0 3C C5 01 00 48 8B 57 10 83 C8 FF F0 0F C1 42 18 <...H.W.......B. fffff800`02c0ec00 83 C0 FF 75 14 F0 0F B1 6A 1C 75 0D 48 8D 0D ED ...u....j.u.H... fffff800`02c0ec10 62 02 00 FF 15 4F C4 01 00 85 F6 74 1E 48 8B CE b....O.....t.H.. fffff800`02c0ec20 F6 43 10 10 74 0C 8B 43 10 25 EF 0F 00 00 48 89 .C..t..C.%....H. fffff800`02c0ec30 43 10 48 2B CD 48 8B 5B 08 75 E5 48 8B 5C 24 30 C.H+.H.[.u.H.\$0 fffff800`02c0ec40 48 8B 6C 24 40 48 8B 74 24 48 48 83 C4 20 5F C3 [email protected]$HH.. _. 
fffff800`02c0ec50 90 90 90 90 90 90 90 90 48 89 5C 24 18 48 89 4C ........H.\$.H.L fffff800`02c0ec60 24 08 55 56 57 41 54 41 55 41 56 41 57 48 83 EC $.UVWATAUAVAWH.. fffff800`02c0ec70 70 4D 8B F1 4D 8B E8 48 8B F2 4C 8B D1 44 0F 20 pM..M..H..L..D. fffff800`02c0ec80 C7 F6 42 0A 05 74 06 48 8B 5A 18 EB 2A 45 33 C9 ..B..t.H.Z..*E3. fffff800`02c0ec90 33 D2 48 8B CE 45 8D 41 01 C7 44 24 28 20 00 00 3.H..E.A..D$( .. fffff800`02c0eca0 00 83 64 24 20 00 FF 15 1C C4 01 00 4C 8B 94 24 ..d$ .......L..$ fffff800`02c0ecb0 B0 00 00 00 48 8B D8 BD 02 00 00 00 48 85 DB 75 ....H.......H..u fffff800`02c0ecc0 4A 40 3A FD 76 1F 48 21 5C 24 20 45 33 C9 BA 00 J@:.v.H!\$ E3... fffff800`02c0ecd0 10 00 00 B9 AC 00 00 00 41 B8 05 EF 00 00 FF 15 ........A....... fffff800`02c0ece0 0C C6 01 00 CC 8A 84 24 D8 00 00 00 44 8B 8C 24 .......$....D..$ fffff800`02c0ecf0 D0 00 00 00 4D 8B C6 49 8B D5 48 8B CE 88 44 24 ....M..I..H...D$ fffff800`02c0ed00 20 E8 02 FA FF FF E9 4D 01 00 00 44 8B BC 24 D0 ......M...D..$. fffff800`02c0ed10 00 00 00 BA FF 0F 00 00 41 8B CD 23 CA 41 8B C7 ........A..#.A.. fffff800`02c0ed20 C6 84 24 B8 00 00 00 00 23 C2 44 8D A4 01 FF 0F ..$.....#.D..... fffff800`02c0ed30 00 00 41 8B C7 41 C1 EC 0C C1 E8 0C 44 03 E0 44 ..A..A......D..D fffff800`02c0ed40 89 64 24 30 40 3A FD 76 41 33 C9 49 8B C6 45 85 .d$0@:.vA3.I..E. fffff800`02c0ed50 E4 74 64 48 F7 40 10 00 F0 FF FF 74 0D 48 8B 40 [email protected].@ fffff800`02c0ed60 08 FF C1 41 3B CC 72 EB EB 4D 48 83 64 24 20 00 ...A;.r..MH.d$ .

    Read the article
