Search Results

Search found 6630 results on 266 pages for 'cname record'.


  • Ajax post not posting email address?

    - by jeitjet
    UPDATE: It will not work in Firefox, but will work on any other browser. I even tried loading Firefox in safe mode (disabling all plugins, etc.) and still no worky. :( I'm trying to do an AJAX post (on form submission) to a separate PHP file, which works fine without trying to send an email address through the post. I'm fairly new to AJAX and pretty familiar with PHP. Here's my form and ajax call <form class="form" method="POST" name="settingsNotificationsForm"> <div class="clearfix"> <label>Email <em>*</em><small>A valid email address</small></label><input type="email" required="required" name="email" id="email" /> </div> <div class="clearfix"> <label>Email Notification<small>...when a new subscriber joins</small></label><input type="checkbox" name="subscribe_notifications" id="subscribe_notifications"> Receive an email notification with phone number when someone new subscribes to 'BIZDEMO' </div> <div class="clearfix"> <label>Email Notification<small>...when a subscriber cancels</small></label><input type="checkbox" name="unsubscribe_notifications" id="unsubscribe_notifications"> Receive an email notification with phone number when someone new unsubscribes to 'BIZDEMO' </div> <div class="action clearfix top-margin"> <button class="button button-gray" type="submit" id="notifications_submit"><span class="accept"></span>Save</button> </div> </form> and AJAX call: <script type="text/javascript"> jQuery(document).ready(function () { $("#notifications_submit").click(function() { var keyword_value = '<?php echo $keyword; ?>'; var email_address = $("input#email").val(); var subscribe_notifications_value = $("input#subscribe_notifications").attr('checked'); var unsubscribe_notifications_value = $("input#unsubscribe_notifications").attr('checked'); var data_values = { keyword : keyword_value, email : email_address, subscribe_notifications : subscribe_notifications_value, unsubscribe_notifications : unsubscribe_notifications_value }; $.ajax({ type: "POST", url: "../includes/ajax/update_settings.php", data: data_values, success: alert('Settings updated successfully!'), }); }); }); and receiving page: <?php include_once ("../db/db_connect.php"); $keyword = FILTER_INPUT(INPUT_POST, 'keyword' ,FILTER_SANITIZE_STRING); $email = FILTER_INPUT(INPUT_POST, 'email' ,FILTER_SANITIZE_EMAIL); $subscribe_notifications = FILTER_INPUT(INPUT_POST, 'subscribe_notifications' ,FILTER_SANITIZE_STRING); $unsubscribe_notifications = FILTER_INPUT(INPUT_POST, 'unsubscribe_notifications' ,FILTER_SANITIZE_STRING); $table = 'keyword_options'; $data_values = array('email' => $email, 'sub_notify' => $subscribe_notifications, 'unsub_notify' => $unsubscribe_notifications); foreach ($data_values as $name=>$value) { // See if keyword is already in database table $filter = array('keyword' => $keyword); $result = $db->find($table, $filter); if (count($result) > 0 && $new != true) { $where = array('keyword' => $keyword, 'keyword_meta' => $name); $data = array('keyword_value' => $value); $db->update($table, $where, $data); } else { $data = array('keyword' => $keyword, 'keyword_meta' => $name, 'keyword_value' => $value); $db->create($table, $data); $new = true; // If this is a new record, always go to else statement } } unset($value); Here are some weird things that happen: When I only enter text into the email field, (i.e. - abc), it works fine, posts correctly, etc. When I enter a bogus email address with the "." before the "@", it works fine When I enter a validated email address (with the "." after the "@"), the post fails. 
Ideas?
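    One thing worth ruling out (an assumption on my part, not something the poster confirmed): the click handler never prevents the native form submission, and success: alert(...) calls the alert while the options object is being built instead of passing a callback, so the real submit/navigation can race and abort the XHR in Firefox. A minimal reworked handler, reusing the ids and URL from the question, might look like this:

```javascript
// Sketch only: bind to the form's submit event, stop the default submission, and pass
// "success" as a function. Ids, the URL and the PHP-echoed keyword come from the question.
jQuery(document).ready(function () {
    $("form[name='settingsNotificationsForm']").submit(function (e) {
        e.preventDefault(); // keep the browser from navigating away and aborting the AJAX request

        var data_values = {
            keyword: '<?php echo $keyword; ?>',
            email: $("input#email").val(),
            subscribe_notifications: $("input#subscribe_notifications").is(":checked") ? 1 : 0,
            unsubscribe_notifications: $("input#unsubscribe_notifications").is(":checked") ? 1 : 0
        };

        $.ajax({
            type: "POST",
            url: "../includes/ajax/update_settings.php",
            data: data_values,
            success: function () {
                // Wrapped in a function so it only runs when the request completes.
                alert('Settings updated successfully!');
            }
        });
    });
});
```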

    Read the article

  • Mail Server on Google Apps, Hotmail Blocks

    - by detay
    My company provides private research tools (sites) for companies. Each site has its own domain name and users. These sites need to be able to send outbound mail for notifications such as password reminders. We started using Google Apps for this purpose, but lately we are having tremendous problems with mailing, especially to hotmail. I added the SPF record as required. Mxtoolbox says my domains are not in any blacklists. I was told to contact https://postmaster.live.com/snds/addnetwork.aspx and add my network. Given that I am using google's mail servers for sending mail, should I add their IP address for registering my domain? Or should I add my server's IP even though I am not hosting the mail server on it? Here's an example of a returned message; > Delivered-To: [email protected] Received: by 10.112.1.195 with SMTP > id 3csp62757lbo; > Mon, 10 Dec 2012 10:42:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; > d=gmail.com; s=20120113; > h=mime-version:from:to:x-failed-recipients:subject:message-id:date > :content-type; > bh=x41JiCH75hsonh+5c/kzwIs7R8u4Hum2u396lkV3g2w=; > b=v+bBPvufqfVkNz2UrnhyyoGj1Cuwf6x/yMj0IXJgH27uJU7TqBOvbwnNHf33sQ7B9T > uGzxwryC2fmxBh72cGz1spwWK348nUp6KK73MXKnF8mpc3nPQ8Ke+EQpSPeJdq/7oZvd > scZTGpy//IiGEUDU5bJ7YPqYXQRycY5N6AF5iI4mWWwbS4opybp3IKpDDktu11p/YEEO > Fj9wnSCx3nLMXB/XZgSjjmnaluGhYNdh3JFtz83Vmr50qXTG/TuIXlkirP73GfGvIt2S > 7sy8/YdEbZR92mYe9wucditYr7MOuyjyYrZYCg+weXeTZiJX1PfBVieD+kD9MnCairnP > rQeQ== Received: by 10.14.215.197 with SMTP id e45mr52766237eep.0.1355164974784; > Mon, 10 Dec 2012 10:42:54 -0800 (PST) MIME-Version: 1.0 Return-Path: <> Received: by 10.14.215.197 with SMTP id > e45mr65950764eep.0; Mon, 10 Dec 2012 10:42:54 -0800 (PST) From: Mail > Delivery Subsystem <[email protected]> To: > [email protected] X-Failed-Recipients: ********@hotmail.fr > Subject: Delivery Status Notification (Failure) Message-ID: > <[email protected]> Date: Mon, 10 Dec 2012 > 18:42:54 +0000 Content-Type: text/plain; charset=ISO-8859-1 > > Delivery to the following recipient failed permanently: > > *********@hotmail.fr > > Technical details of permanent failure: Message rejected. See > http://support.google.com/mail/bin/answer.py?answer=69585 for more > information. > > ----- Original message ----- > > Received: by 10.14.215.197 with SMTP id > e45mr52766228eep.0.1355164974700; > Mon, 10 Dec 2012 10:42:54 -0800 (PST) Return-Path: <[email protected]> Received: from WIN-1BBHFMJGUGS ([91.93.102.12]) > by mx.google.com with ESMTPS id l3sm26787704eem.14.2012.12.10.10.42.53 > (version=TLSv1/SSLv3 cipher=OTHER); > Mon, 10 Dec 2012 10:42:54 -0800 (PST) Message-ID: <[email protected]> MIME-Version: 1.0 > From: [email protected] To: *********@hotmail.fr Date: Mon, 10 Dec > 2012 10:42:54 -0800 (PST) Subject: Rejoignez-nous sur achbanlek.com > Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: > base64 > > ----- End of message -----
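    Since the mail actually leaves through Google's SMTP servers, the SPF record normally needs to authorize Google's sending ranges rather than the web server's own IP. A minimal sketch of such a TXT record, assuming the domain relays only through Google Apps (example.com stands in for the real domain):

```
; Hypothetical zone entry; example.com is a placeholder for the poster's domain.
; include:_spf.google.com pulls in Google's outbound mail ranges; ~all soft-fails everything else.
example.com.   3600   IN   TXT   "v=spf1 include:_spf.google.com ~all"
```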

    Read the article

  • Update transaction in SQL Server 2008 R2 from ASP.Net not working

    - by Amarus
    Hello! Even though I've been a stalker here for ages, this is the first post I'm making. Hopefully, it won't end here and more optimistically future posts might actually be me trying to give a hand to someone else, I do owe this community that much and more. Now, what I'm trying to do is simple and most probably the reason behind it not working is my own stupidity. However, I'm stumped here. I'm working on an ASP.Net website that interacts with an SQL Server 2008 R2 database. So far everything has been going okay but updating a row (or more) just won't work. I even tried copying and pasting code from this site and others but it's always the same thing. In short: No exception or errors are shown when the update command executes (it even gives the correct count of affected rows) but no changes are actually made on the database. Here's a simplified version of my code (the original had more commands and tons of parameters each, but even when it's like this it doesn't work): protected void btSubmit_Click(object sender, EventArgs e) { using (SqlConnection connection = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString)) { string commandString = "UPDATE [impoundLotAlpha].[dbo].[Vehicle]" + "SET [VehicleMake] = @VehicleMake" + " WHERE [ComplaintID] = @ComplaintID"; using (SqlCommand command = new SqlCommand(commandString, connection)) { SqlTransaction transaction = null; try { command.Connection.Open(); transaction = connection.BeginTransaction(IsolationLevel.Serializable); command.Transaction = transaction; SqlParameter complaintID = new SqlParameter("@complaintID", SqlDbType.Int); complaintID.Value = HttpContext.Current.Request.QueryString["complaintID"]; command.Parameters.Add(complaintID); SqlParameter VehicleMake = new SqlParameter("@VehicleMake", SqlDbType.VarChar, 20); VehicleMake.Value = tbVehicleMake.Text; command.Parameters.Add(VehicleMake); command.ExecuteNonQuery(); transaction.Commit(); } catch { transaction.Rollback(); throw; } finally { connection.Close(); } } } } I've tried this with the "SqlTransaction" stuff and without it and nothing changes. Also, since I'm doing multiple updates at once, I want to have them act as a single transaction. I've found that it can be either done like this or by use of the classes included in the System.Transactions namespace (CommittableTransaction, TransactionScope...). I tried all I could find but didn't get any different results. The connection string in web.config is as follows: <connectionStrings> <add name="ApplicationServices" connectionString="Data Source=localhost;Initial Catalog=ImpoundLotAlpha;Integrated Security=True" providerName="System.Data.SqlClient"/> </connectionStrings> So, tldr; version: What is the mistake that I did with that record update attempt? (Figured it out, check below if you're having a similar issue.) What is the best method to gather multiple update commands as a single transaction? Thanks in advance for any kind of help and/or suggestions! Edit: It seems that I was lacking some sleep yesterday cause this time it only took me 5 minutes to figure out my mistake. Apparently the update was working properly but I failed to notice that the textbox values were being overwritten in Page_Load. For some reason I had this part commented: if (IsPostBack) return; The second part of the question still stands. But should I post this as an answer to my own question or keep it like this?
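    For the part of the question that is still open (wrapping several updates in one transaction), one common approach is TransactionScope from System.Transactions; connections opened inside the scope enlist in the ambient transaction automatically. A minimal sketch, reusing the connection string and the Vehicle update from the question; the second UPDATE and tbVehicleModel are made up purely to show a second command in the same unit of work:

```csharp
// Assumes: using System.Transactions; using System.Data.SqlClient; using System.Configuration;
// plus a project reference to System.Transactions.dll. Runs inside the page's click handler.
using (var scope = new TransactionScope())
using (var connection = new SqlConnection(
    ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString))
{
    connection.Open();   // enlists in the ambient transaction automatically

    using (var command = new SqlCommand(
        "UPDATE [dbo].[Vehicle] SET [VehicleMake] = @VehicleMake WHERE [ComplaintID] = @ComplaintID",
        connection))
    {
        command.Parameters.AddWithValue("@VehicleMake", tbVehicleMake.Text);
        command.Parameters.AddWithValue("@ComplaintID", int.Parse(Request.QueryString["complaintID"]));
        command.ExecuteNonQuery();
    }

    using (var command = new SqlCommand(
        "UPDATE [dbo].[Vehicle] SET [VehicleModel] = @VehicleModel WHERE [ComplaintID] = @ComplaintID",
        connection))
    {
        command.Parameters.AddWithValue("@VehicleModel", tbVehicleModel.Text);   // hypothetical textbox
        command.Parameters.AddWithValue("@ComplaintID", int.Parse(Request.QueryString["complaintID"]));
        command.ExecuteNonQuery();
    }

    // If an exception is thrown before this line, the scope is disposed without Complete()
    // and every command above is rolled back as one unit.
    scope.Complete();
}
```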

    Read the article

  • Serious problem with WCF, GridViews, Callbacks and ExecuteReader exceptions

    - by barjed
    Hi, I have this problem that is driving me insane. I have a project to deliver before Thursday. Basically an app consiting of three components that communicate with each other in WCF. I have one console app and one Windows Forms app. The console app is a server that's connected to the database. You can add records to it via the Windows Forms client that connectes with the server through the WCF. The code for the client: namespace BankAdministratorClient { [CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Single, UseSynchronizationContext = false)] public partial class Form1 : Form, BankServverReference.BankServerCallback { private BankServverReference.BankServerClient server = null; private SynchronizationContext interfaceContext = null; public Form1() { InitializeComponent(); interfaceContext = SynchronizationContext.Current; server = new BankServverReference.BankServerClient(new InstanceContext(this), "TcpBinding"); server.Open(); server.Subscribe(); refreshGridView(""); } public void refreshClients(string s) { SendOrPostCallback callback = delegate(object state) { refreshGridView(s); }; interfaceContext.Post(callback, s); } public void refreshGridView(string s) { try { userGrid.DataSource = server.refreshDatabaseConnection().Tables[0]; } catch (Exception ex) { MessageBox.Show(ex.ToString()); } } private void buttonAdd_Click(object sender, EventArgs e) { server.addNewAccount(Int32.Parse(inputPIN.Text), Int32.Parse(inputBalance.Text)); } private void Form1_FormClosing(object sender, FormClosingEventArgs e) { try { server.Unsubscribe(); server.Close(); }catch{} } } } The code for the server: namespace SSRfinal_tcp { class Program { static void Main(string[] args) { Console.WriteLine(MessageHandler.dataStamp("The server is starting up")); using (ServiceHost server = new ServiceHost(typeof(BankServer))) { server.Open(); Console.WriteLine(MessageHandler.dataStamp("The server is running")); Console.ReadKey(); } } } [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, InstanceContextMode = InstanceContextMode.PerCall, IncludeExceptionDetailInFaults = true)] public class BankServer : IBankServerService { private static DatabaseLINQConnectionDataContext database = new DatabaseLINQConnectionDataContext(); private static List<IBankServerServiceCallback> subscribers = new List<IBankServerServiceCallback>(); public void Subscribe() { try { IBankServerServiceCallback callback = OperationContext.Current.GetCallbackChannel<IBankServerServiceCallback>(); if (!subscribers.Contains(callback)) subscribers.Add(callback); Console.WriteLine(MessageHandler.dataStamp("A new Bank Administrator has connected")); } catch { Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has failed to connect")); } } public void Unsubscribe() { try { IBankServerServiceCallback callback = OperationContext.Current.GetCallbackChannel<IBankServerServiceCallback>(); if (subscribers.Contains(callback)) subscribers.Remove(callback); Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has been signed out from the connection list")); } catch { Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has failed to sign out from the connection list")); } } public DataSet refreshDatabaseConnection() { var q = from a in database.GetTable<Account>() select a; DataTable dt = q.toTable(rec => new object[] { q }); DataSet data = new DataSet(); data.Tables.Add(dt); Console.WriteLine(MessageHandler.dataStamp("A Bank Administrator has requested a database data listing refresh")); return data; } public 
void addNewAccount(int pin, int balance) { Account acc = new Account() { PIN = pin, Balance = balance, IsApproved = false }; database.Accounts.InsertOnSubmit(acc); database.SubmitChanges(); database.addNewAccount(pin, balance, false); subscribers.ForEach(delegate(IBankServerServiceCallback callback) { callback.refreshClients("New operation is pending approval."); }); } } } This is really simple and it works for a single window. However, when you open multiple instances of the client window and try to add a new record, the window that is performing the insert operation crashes with the ExecuteReader error ("requires an open and available connection. The connection's current state is connecting"). I have no idea what's going on. Please advise.
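    One plausible cause (not confirmed by the poster): the service keeps a single static DatabaseLINQConnectionDataContext that every concurrent WCF call shares, and a LINQ to SQL DataContext is not thread-safe, which is a classic way to hit the "requires an open and available connection" error as soon as two clients insert at the same time. A sketch of the insert operation using a short-lived, per-call context instead, with type names reused from the question:

```csharp
// Sketch: create one DataContext per operation rather than one static instance for the
// whole service, so concurrent calls never share a connection mid-command.
public void addNewAccount(int pin, int balance)
{
    using (var database = new DatabaseLINQConnectionDataContext())
    {
        var acc = new Account { PIN = pin, Balance = balance, IsApproved = false };
        database.Accounts.InsertOnSubmit(acc);
        database.SubmitChanges();
    }

    // Notify subscribers outside the data-access scope.
    subscribers.ForEach(callback =>
        callback.refreshClients("New operation is pending approval."));
}
```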

    Read the article

  • GPO errors filling up event viewer

    - by burntehsky
    there have been a few issues with the server i have been working on i check the event viewer and it is filled with the errors below i was not sure how to go about fixing this i looked in the path where the file is and it is there Windows cannot access the file gpt.ini for GPO CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=ISPHOME,DC=NET. The file must be present at the location <\\isphome.net\\sysvol\ISPHOME.NET\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini>. (The network location cannot be reached. For information about network troubleshooting, see Windows Help. ). Group Policy processing aborted. C:\Documents and Settings\Dimitri>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : ispserver Primary Dns Suffix . . . . . . . : ISPHOME.NET Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : ISPHOME.NET Ethernet adapter Local Area Connection 3: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Intel(R) PRO/100 VE Network Connection #2 Physical Address. . . . . . . . . : 00-07-E9-AA-3E-C3 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.1.50 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.1.1 DNS Servers . . . . . . . . . . . : 127.0.0.1 *dcdiag /c /v is below* Summary of test results for DNS servers used by the above domain contro llers: DNS server: 192.168.1.1 (<name unavailable>) All tests passed on this DNS server This is a valid DNS server DNS server: 192.168.1.50 (<name unavailable>) All tests passed on this DNS server This is a valid DNS server Name resolution is funtional. _ldap._tcp SRV record for the fores t root domain is registered Summary of DNS test results: Auth Basc Forw Del Dyn RReg Ext ________________________________________________________________ Domain: ISPHOME.NET ispserver PASS FAIL PASS PASS PASS PASS n/a ......................... ISPHOME.NET failed test DNS
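    A few commands that may help narrow this down, run from the server that logs the error (a sketch, using the UNC path quoted in the event): if the SYSVOL path is browsable and the domain controller locator works but the event keeps appearing, the problem is more likely DNS or Netlogon related than a missing gpt.ini.

```
dir \\isphome.net\sysvol\ISPHOME.NET\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}
nltest /dsgetdc:ISPHOME.NET
dcdiag /test:netlogons /v
gpupdate /force
```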

    Read the article

  • PHP Form Automatic Submission

    - by sex stevens
    I need to create a PHP script that runs around the clock and re-submits a form without actually loading the form, just sending the same request over and over. I used a program called WireShark to record my packets and play them back using a packet player. This took two hours of troubleshooting and configuring. When everything finally worked, it turns out the end result was a dead end. The packets being sent did not affect anything. This code is what the script needs to resubmit: <a href="#" onclick="_('_tf11').value=15; return false;">(15)</a> <input type="image" id="btn_train" class="dynamic_img" value="ok" name="s1" src="assets/x.gif" alt="Training"> Okay, I know that here on stackoverflow you can't just ask people to do your work. The problem is that I don't even know where to start here. So please at least give me a direction, or a function name or a lead on how to be able to submit this form. Then I'll write a program and you guys can help me finish it if I will need help. here is what I made: The program: <?php //create array of data to be posted $post_data['tf[11]'] = '10000'; $post_data['s1'] = 'ok'; //traverse array and prepare data for posting (key1=value1) foreach ( $post_data as $key => $value) { $post_items[] = $key . '=' . $value; } //create the final string to be posted using implode() $post_string = implode ('&', $post_items); //create cURL connection $curl_connection = curl_init('http://crusadertrav.com/build.php?id=33'); //set options curl_setopt($curl_connection, CURLOPT_CONNECTTIMEOUT, 30); curl_setopt($curl_connection, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"); curl_setopt($curl_connection, CURLOPT_RETURNTRANSFER, true); curl_setopt($curl_connection, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($curl_connection, CURLOPT_FOLLOWLOCATION, 1); //set data to be posted curl_setopt($curl_connection, CURLOPT_POSTFIELDS, $post_string); //perform our request $result = curl_exec($curl_connection); //show information regarding the request print_r(curl_getinfo($curl_connection)); echo curl_errno($curl_connection) . '-' . curl_error($curl_connection); //close the connection curl_close($curl_connection); ?> The forms: <input type="text" class="text" id="_tf11" name="tf[11]" value="0" maxlength="4"> <input type="image" id="btn_train" class="dynamic_img" value="ok" name="s1" src="assets/x.gif" alt="Training"> The result: Array ( [url] => http://crusadertrav.com/index.php [content_type] => text/html; charset=UTF-8 [http_code] => 200 [header_size] => 895 [request_size] => 350 [filetime] => -1 [ssl_verify_result] => 0 [redirect_count] => 1 [total_time] => 2.781 [namelookup_time] => 0 [connect_time] => 0.532 [pretransfer_time] => 0.532 [size_upload] => 0 [size_download] => 10655 [speed_download] => 3831 [speed_upload] => 0 [download_content_length] => 0 [upload_content_length] => 0 [starttransfer_time] => 0.954 [redirect_time] => 0.906 [certinfo] => Array ( ) [primary_ip] => 5.154.88.71 [primary_port] => 80 [local_ip] => 192.168.11.52 [local_port] => 3222 [redirect_url] => ) 0-
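    Two things stand out here, neither confirmed by the poster: the real form field is called tf[11] but the script builds the POST string by hand (the brackets need URL-encoding), and the cURL trace ends on a redirect to index.php, which with browser games usually means the request arrived without a logged-in session cookie. A reworked sketch with http_build_query and a cookie jar; cookies.txt is a placeholder, and the login request that would fill it is not shown:

```php
<?php
// Sketch: encode the real field names properly and replay a saved session cookie.
$post_string = http_build_query(array(
    'tf[11]' => '15',
    's1'     => 'ok',
));

$curl = curl_init('http://crusadertrav.com/build.php?id=33');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, $post_string);
curl_setopt($curl, CURLOPT_COOKIEFILE, 'cookies.txt'); // send cookies saved by a previous login request
curl_setopt($curl, CURLOPT_COOKIEJAR,  'cookies.txt'); // keep any new cookies the site sets
$result = curl_exec($curl);
curl_close($curl);
?>
```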

    Read the article

  • Table name is null in the SQLite database on an Android device

    - by Mahe
    I am building a simple app which stores some contacts and retrieves contacts in android phone device. I have created my own database and a table and inserting the values to the table in phone. My phone is not rooted. So I cannot access the files, but I see that values are stored in the table. And tested on a emulator also. Till here it is fine. Display all the contacts in a list by fetching data from table. This is also fine. But the problem is When I am trying to delete the record, it shows the table name is null in the logcat(not an exception), and the data is not deleted. But in emulator the data is getting deleted from table. I am not able to achieve this through phone. This is my code for deleting, public boolean onContextItemSelected(MenuItem item) { super.onContextItemSelected(item); AdapterView.AdapterContextMenuInfo info = (AdapterView.AdapterContextMenuInfo) item .getMenuInfo(); int menuItemIndex = item.getItemId(); String[] menuItems = getResources().getStringArray(R.array.menu); String menuItemName = menuItems[menuItemIndex]; String listItemName = Customers[info.position]; if (item.getTitle().toString().equalsIgnoreCase("Delete")) { Toast.makeText( context, "Selected List item is: " + listItemName + "MenuItem is: " + menuItemName, Toast.LENGTH_LONG).show(); DB = context.openOrCreateDatabase("CustomerDetails.db", MODE_PRIVATE, null); try { int pos = info.position; pos = pos + 1; Log.d("", "customers[pos]: " + Customers[info.position]); Cursor c = DB .rawQuery( "Select customer_id,first_name,last_name from CustomerInfo", null); int rowCount = c.getCount(); DB.delete(Table_name, "customer_id" + "=" + String.valueOf(pos), null); DB.close(); Log.d("", "" + String.valueOf(pos)); Toast.makeText(context, "Deleted Customer", Toast.LENGTH_LONG) .show(); // Customers[info.position]=null; getCustomers(); } catch (Exception e) { Toast.makeText(context, "Delete unsuccessfull", Toast.LENGTH_LONG).show(); } } this is my logcat, 07-02 10:12:42.976: D/Cursor(1560): Database path: CustomerDetails.db 07-02 10:12:42.976: D/Cursor(1560): Table name : null 07-02 10:12:42.984: D/Cursor(1560): Database path: CustomerDetails.db 07-02 10:12:42.984: D/Cursor(1560): Table name : null Don't know the reason why data is not being deleted. Data exists in the table. This is the specification I have given for creating the table public static String customer_id="customer_id"; public static String site_id="site_id"; public static String last_name="last_name"; public static String first_name="first_name"; public static String phone_number="phone_number"; public static String address="address"; public static String city="city"; public static String state="state"; public static String zip="zip"; public static String email_address="email_address"; public static String custlat="custlat"; public static String custlng="custlng"; public static String Table_name="CustomerInfo"; final SQLiteDatabase DB = context.openOrCreateDatabase( "CustomerDetails.db", MODE_PRIVATE, null); final String CREATE_TABLE = "create table if not exists " + Table_name + " (" + customer_id + " integer primary key autoincrement, " + first_name + " text not null, " + last_name + " text not null, " + phone_number+ " integer not null, " + address+ " text not null, " + city+ " text not null, " + state+ " text not null, " + zip+ " integer not null, " + email_address+ " text not null, " + custlat+ " double, " + custlng+ " double " +" );"; DB.execSQL(CREATE_TABLE); DB.close(); Please correct my code. I am struggling from two days. 
Any help is appreciated!!
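    The logcat lines about "Table name : null" are just cursor debug output, so a more likely suspect (an assumption, not verified) is the where clause: the code deletes customer_id = position + 1, and autoincrement ids stop matching list positions as soon as any row has been removed or ids are not sequential. A sketch that deletes by the row's actual id, reusing the names from the question and assuming the query order matches the list order:

```java
// Sketch: look up the real customer_id at the tapped position, then delete with whereArgs.
SQLiteDatabase db = context.openOrCreateDatabase("CustomerDetails.db", MODE_PRIVATE, null);
Cursor c = db.rawQuery("SELECT customer_id FROM CustomerInfo", null);
if (c.moveToPosition(info.position)) {
    long customerId = c.getLong(0);
    int rowsDeleted = db.delete("CustomerInfo", "customer_id = ?",
            new String[] { String.valueOf(customerId) });
    Log.d("Delete", "rows deleted: " + rowsDeleted);
}
c.close();
db.close();
```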

    Read the article

  • C - Error with read() of a file, storage in an array, and printing output properly

    - by ns1
    I am new to C, so I am not exactly sure where my error is. However, I do know that the great portion of the issue lies either in how I am storing the doubles in the d_buffer (double) array or the way I am printing it. Specifically, my output keeps printing extremely large numbers (with around 10-12 digits before the decimal point and a trail of zeros after it. Additionally, this is an adaptation of an older program to allow for double inputs, so I only really added the two if statements (in the "read" for loop and the "printf" for loop) and the d_buffer declaration. I would appreciate any input whatsoever as I have spent several hours on this error. #include <stdio.h> #include <fcntl.h> #include <sys/types.h> #include <unistd.h> #include <string.h> struct DataDescription { char fieldname[30]; char fieldtype; int fieldsize; }; /* ----------------------------------------------- eof(fd): returns 1 if file `fd' is out of data ----------------------------------------------- */ int eof(int fd) { char c; if ( read(fd, &c, 1) != 1 ) return(1); else { lseek(fd, -1, SEEK_CUR); return(0); } } void main() { FILE *fp; /* Used to access meta data */ int fd; /* Used to access user data */ /* ---------------------------------------------------------------- Variables to hold the description of the data - max 10 fields ---------------------------------------------------------------- */ struct DataDescription DataDes[10]; /* Holds data descriptions for upto 10 fields */ int n_fields; /* Actual # fields */ /* ------------------------------------------------------ Variables to hold the data - max 10 fields.... ------------------------------------------------------ */ char c_buffer[10][100]; /* For character data */ int i_buffer[10]; /* For integer data */ double d_buffer[10]; int i, j; int found; printf("Program for searching a mini database:\n"); /* ============================= Read in meta information ============================= */ fp = fopen("db-description", "r"); n_fields = 0; while ( fscanf(fp, "%s %c %d", DataDes[n_fields].fieldname, &DataDes[n_fields].fieldtype, &DataDes[n_fields].fieldsize) > 0 ) n_fields++; /* --- Prints meta information --- */ printf("\nThe database consists of these fields:\n"); for (i = 0; i < n_fields; i++) printf("Index %d: Fieldname `%s',\ttype = %c,\tsize = %d\n", i, DataDes[i].fieldname, DataDes[i].fieldtype, DataDes[i].fieldsize); printf("\n\n"); /* --- Open database file --- */ fd = open("db-data", O_RDONLY); /* --- Print content of the database file --- */ printf("\nThe database content is:\n"); while ( ! eof(fd) ) { /* ------------------ Read next record ------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) read(fd, &i_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'F' ) read(fd, &d_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'C' ) read(fd, &c_buffer[j], DataDes[j].fieldsize); } double d; /* ------------------ Print it... 
------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) printf("%d ", i_buffer[j]); if ( DataDes[j].fieldtype == 'F' ) d = d_buffer[j]; printf("%lf ", d); if ( DataDes[j].fieldtype == 'C' ) printf("%s ", c_buffer[j]); } printf("\n"); } printf("\n"); printf("\n"); } Post edits output: 16777216 0.000000 107245694331284094976.000000 107245694331284094976.000000 Pi 33554432 107245694331284094976.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 Secret Key 50331648 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 -0.000000 -0.000000 The number E Expected Output: 3 rows of data ending with the number "e = 2.18281828" To reproduce the problem, the following two files need to be in the same directory as the lookup-data.c file: - db-data - db-description
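    One concrete bug visible in the posted code, independent of the read() sizes: the 'F' branch in the print loop has no braces, so the printf for the double runs on every field and prints d before it has ever been assigned, which by itself can produce the huge values shown. A corrected sketch of just that loop (it also helps to check that the 'F' fieldsize in db-description equals sizeof(double)):

```c
/* Print loop with the missing braces added around the 'F' case. */
for (j = 0; j < n_fields; j++) {
    if (DataDes[j].fieldtype == 'I')
        printf("%d ", i_buffer[j]);
    if (DataDes[j].fieldtype == 'F') {
        printf("%lf ", d_buffer[j]);
    }
    if (DataDes[j].fieldtype == 'C')
        printf("%s ", c_buffer[j]);
}
printf("\n");
```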

    Read the article

  • NoSQL with MongoDB, NoRM and ASP.NET MVC

    - by shiju
     In this post, I will give an introduction to how to work on NoSQL and document database with MongoDB , NoRM and ASP.Net MVC 2. NoSQL and Document Database The NoSQL movement is getting big attention in this year and people are widely talking about document databases and NoSQL along with web application scalability. According to Wikipedia, "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases. These data stores may not require fixed table schemas, usually avoid join operations and typically scale horizontally. Academics and papers typically refer to these databases as structured storage". Document databases are schema free so that you can focus on the problem domain and don't have to worry about updating the schema when your domain is evolving. This enables truly a domain driven development. One key pain point of relational database is the synchronization of database schema with your domain entities when your domain is evolving.There are lots of NoSQL implementations are available and both CouchDB and MongoDB got my attention. While evaluating both CouchDB and MongoDB, I found that CouchDB can’t perform dynamic queries and later I picked MongoDB over CouchDB. There are many .Net drivers available for MongoDB document database. MongoDB MongoDB is an open source, scalable, high-performance, schema-free, document-oriented database written in the C++ programming language. It has been developed since October 2007 by 10gen. MongoDB stores your data as binary JSON (BSON) format . MongoDB has been getting a lot of attention and you can see the some of the list of production deployments from here - http://www.mongodb.org/display/DOCS/Production+Deployments NoRM – C# driver for MongoDB NoRM is a C# driver for MongoDB with LINQ support. NoRM project is available on Github at http://github.com/atheken/NoRM. Demo with ASP.NET MVC I will show a simple demo with MongoDB, NoRM and ASP.NET MVC. To work with MongoDB and  NoRM, do the following steps Download the MongoDB databse For Windows 32 bit, download from http://downloads.mongodb.org/win32/mongodb-win32-i386-1.4.1.zip  and for Windows 64 bit, download  from http://downloads.mongodb.org/win32/mongodb-win32-x86_64-1.4.1.zip . The zip contains the mongod.exe for run the server and mongo.exe for the client Download the NorM driver for MongoDB at http://github.com/atheken/NoRM Create a directory call C:\data\db. This is the default location of MongoDB database. You can override the behavior. Run C:\Mongo\bin\mongod.exe. This will start the MongoDb server Now I am going to demonstrate how to program with MongoDb and NoRM in an ASP.NET MVC application.Let’s write a domain class public class Category {            [MongoIdentifier]public ObjectId Id { get; set; } [Required(ErrorMessage = "Name Required")][StringLength(25, ErrorMessage = "Must be less than 25 characters")]public string Name { get; set;}public string Description { get; set; }}  ObjectId is a NoRM type that represents a MongoDB ObjectId. NoRM will automatically update the Id becasue it is decorated by the MongoIdentifier attribute. The next step is to create a mongosession class. This will do the all interactions to the MongoDB. 
internal class MongoSession<TEntity> : IDisposable{    private readonly MongoQueryProvider provider;     public MongoSession()    {        this.provider = new MongoQueryProvider("Expense");    }     public IQueryable<TEntity> Queryable    {        get { return new MongoQuery<TEntity>(this.provider); }    }     public MongoQueryProvider Provider    {        get { return this.provider; }    }     public void Add<T>(T item) where T : class, new()    {        this.provider.DB.GetCollection<T>().Insert(item);    }     public void Dispose()    {        this.provider.Server.Dispose();     }    public void Delete<T>(T item) where T : class, new()    {        this.provider.DB.GetCollection<T>().Delete(item);    }     public void Drop<T>()    {        this.provider.DB.DropCollection(typeof(T).Name);    }     public void Save<T>(T item) where T : class,new()    {        this.provider.DB.GetCollection<T>().Save(item);                }  }    The MongoSession constrcutor will create an instance of MongoQueryProvider that supports the LINQ expression and also create a database with name "Expense". If database is exists, it will use existing database, otherwise it will create a new databse with name  "Expense". The Save method can be used for both Insert and Update operations. If the object is new one, it will create a new record and otherwise it will update the document with given ObjectId.  Let’s create ASP.NET MVC controller actions for CRUD operations for the domain class Category public class CategoryController : Controller{ //Index - Get the category listpublic ActionResult Index(){    using (var session = new MongoSession<Category>())    {        var categories = session.Queryable.AsEnumerable<Category>();        return View(categories);    }} //edit a single category[HttpGet]public ActionResult Edit(ObjectId id) {     using (var session = new MongoSession<Category>())    {        var category = session.Queryable              .Where(c => c.Id == id)              .FirstOrDefault();         return View("Save",category);    } }// GET: /Category/Create[HttpGet]public ActionResult Create(){    var category = new Category();    return View("Save", category);}//insert or update a category[HttpPost]public ActionResult Save(Category category){    if (!ModelState.IsValid)    {        return View("Save", category);    }    using (var session = new MongoSession<Category>())    {        session.Save(category);        return RedirectToAction("Index");    } }//Delete category[HttpPost]public ActionResult Delete(ObjectId Id){    using (var session = new MongoSession<Category>())    {        var category = session.Queryable              .Where(c => c.Id == Id)              .FirstOrDefault();        session.Delete(category);        var categories = session.Queryable.AsEnumerable<Category>();        return PartialView("CategoryList", categories);    } }        }  You can easily work on MongoDB with NoRM and can use with ASP.NET MVC applications. I have created a repository on CodePlex at http://mongomvc.codeplex.com and you can download the source code of the ASP.NET MVC application from here
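    As a quick usage sketch of the MongoSession&lt;TEntity&gt; class above (the sample data is invented here), documents can be saved and queried back with LINQ without any schema setup, which is the schema-free point made earlier:

```csharp
// Sketch: insert a couple of categories and query one back through the session's LINQ provider.
using (var session = new MongoSession<Category>())
{
    session.Save(new Category { Name = "Books", Description = "Printed and e-books" });
    session.Save(new Category { Name = "Electronics", Description = "Gadgets" });

    var books = session.Queryable
                       .Where(c => c.Name == "Books")
                       .FirstOrDefault();
}
```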

    Read the article

  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS. One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member that starts work when most of the Sydney workers are heading home as I can do the upgrade without impacting them. The down side is that if you have any blockers then you can be pretty sure that everyone that can deal with your problem is asleep I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one that did the install so I thought it prudent to make sure that it was OK. Verifying Team Foundation Server 2010 We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors. Figure: Even the SQL Setup looks good. I don’t know how Adam did it! Backing up the Team Foundation Server 2008 Databases As we are moving from one server to another (recommended method) we will be taking a backup of our TFS2008 databases and resorting them to the SQL Server for the new TFS2010 Server. Do not just detach and reattach. This will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you backup the databases. This will stop any inadvertent check-ins or changes to TFS 2008. Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database. It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade as there is always one developer who has not finished and starts screaming. This time it was John Liu that needed 10 more minutes to make his changes and check-in, so I always give it 30 minutes and see if anyone screams. John Liu [SSW] said:   are you doing something to TFS :-O MrHinsh [SSW UK][VS ALM MVP] said:   I have stopped TFS 2008 as per my emails John Liu [SSW] said:   haven't finish check in @_@   can we have it for 10mins? :) MrHinsh [SSW UK][VS ALM MVP] said:   TFS 2008 has been started John Liu [SSW] said:   I love you! -IM conversation at TFS Upgrade +25 minutes After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams so the upgrade can continue. Figure: Backup all of the databases for TFS and include the Reporting Services, just in case.   Figure: Check that all the backups have been taken Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed as if we have any problems, or just plain run out of time, then you just turn the TFS 2008 server back on and all you have lost is one upgrade day, and not 10 developer days. As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to: TFS2008 File count: Type Count 1 1845 2 15770 Areas & Iterations: 139 You can use this to verify that the upgrade was successful. it should however be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process. 
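    The backup step shown in the screenshots can also be scripted; a minimal T-SQL sketch for one of the databases (the paths are placeholders, and the same statement is repeated for each TFS 2008 database):

```sql
-- On the old TFS 2008 data tier:
BACKUP DATABASE TfsVersionControl
TO DISK = N'D:\Backups\TfsVersionControl.bak'
WITH INIT, STATS = 10;

-- Quick sanity check of the backup file before copying it to the new server:
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\TfsVersionControl.bak';
```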
Restore Team Foundation Server 2008 Databases Restoring the databases is much more time consuming than just attaching them as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems. Figure: Restore each of the databases to either a latest or specific point in time.     Figure: Restore all of the required databases Now that all of your databases are restored you now need to upgrade them to Team Foundation Server 2010. Upgrade Team Foundation Server 2008 Databases This is probably the easiest part of the process. You need to call a fire and forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all of the 6 main TFS 2008 databases are merged into the TfsVersionControl database, upgraded and then the database is renamed to TFS_[CollectionName]. The rename is only the database and not the physical files, so it is worth going back and renaming the physical file as well. This keeps everything neat and tidy. If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. This GUID is used by the clients and they can get a little confused if there are two servers with the same one. To kick of the upgrade you need to open a command prompt and change the path to “C:\Program Files\Microsoft Team Foundation Server 2010\Tools” and run the “import” command in  “tfsconfig”. TfsConfig import /sqlinstance:<Previous TFS Data Tier>                  /collectionName:<Collection Name>                  /confirmed Imports a TFS 2005 or 2008 data tier as a new project collection. Important: This command should only be executed after adequate backups have been performed. After you import, you will need to configure portal and reporting settings via the administration console. EXAMPLES -------- TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed OPTIONS: -------- sqlinstance         The sql instance of the TFS 2005 or 2008 data tier. The TFS databases at that location will be modified directly and will no longer be usable as previous version databases.  Ensure you have back-ups. collectionName      The name of the new Team Project Collection. confirmed           Confirm that you have backed-up databases before importing. This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade as the total database size was under 700MB. This was unlike the upgrade of SSW’s production database with over 17GB of data which took a few hours. At the end of the process you should get no errors and no warnings. The Upgrade operation on the ApplicationTier feature has completed. There were 0 errors and 0 warnings. As this is a new server and not a pure upgrade there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection. 
This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files. Then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this. Figure: Stop the collection so TFS does not take a wobbly when we detach the database. When you try to start the new collection again you will get a conflict with project names and will require to remove the Test Upgrade collection. This is fine and it just needs detached. Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new Collection again. You will now be able to start the new upgraded collection and you are ready for testing. Do you remember the stats we took off the TFS 2008 server? TFS2008 File count: Type Count 1 1845 2 15770 Areas & Iterations: 139 Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control. TFS2010 File count: Type Count 1 19288 Areas & Iterations: 139 Lovely, the number of iterations are the same, and the number of files is bigger. Just what we were looking for. Testing the upgraded Team Foundation Server 2010 Project Collection Can we connect to the new collection and project? Figure: We can connect to the new collection and project.   Figure: make sure you can connect to The upgraded projects and that you can see all of the files. Figure: Team Web Access is there and working. Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case as I am running on the local box you need to use http://localhost:8080/tfs which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname] and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection and in both VS2008 and VS2005 you will need to install the forward compatibility updates. Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 To make sure that you have everything up to date, make sure that you run SSW Diagnostics and get all green ticks. Upgrade Done! At this point you can send out a notice to everyone that the upgrade is complete and and give them the connection details. You need to remember that at this stage we have 2008 project upgraded to run under TFS 2010 but it is still running under that same process template that it was running before. You can only “enable” 2010 features in a process template you can’t upgrade. So what to do? Well, you need to create a new project and migrate things you want to keep across. Souse code is easy, you can move or Branch, but Work Items are more difficult as you can’t move them between projects. This instance is complicated more as the old project uses the Conchango/EMC Scrum for Team System template and I will need to write a script/application to get the work items across with their attachments in tact. That is my next task! Technorati Tags: TFS 2010,TFS 2008,VS ALM

    Read the article

  • How to Export Crystal Reports to Excel, Word, Rich Text, PDF or HTML

    - by jaullo
    When we work with reports we always need export functionality. In Crystal Reports for ASP.NET this is a very simple task; the question that always comes up, however, is how to do it from code-behind. To be able to use the Crystal libraries and their components we first have to import the namespaces: Imports CrystalDecisions.CrystalReports.Engine and Imports CrystalDecisions.Shared. CrystalDecisions.CrystalReports.Engine lets us handle our ReportDocument, and CrystalDecisions.Shared is what we will use for the export. So let's see how we can export the report without having to send it to the printer; remember that by default Crystal Reports already offers a PDF export, but only as part of the print flow, which is exactly what we want to avoid here. We place a button on our ASP page: <asp:Button ID="btntopdf" runat="server" Text="Exportar a PDF" /> and in the button's click handler we execute the following routine:

Protected Sub btntodpf_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btntopdf.Click
    'Load the report, binding it to the data source.
    LoadReporte()
    'Later we will see that these lines can be left out.
    Response.Buffer = False
    Response.Clear() 'ClearContent, ClearHeaders
    reporteDoc.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, True, "NombreArchivo")
End Sub

LoadReport is in charge of filling our Crystal report from the data source. That was the first way of exporting the report, but it is not the only one, so let's look at another approach built around the ExportToHttpResponse method. For this method the code in the button changes only slightly, but first let's review the parameters involved. The first parameter, FormatType, is a value of type ExportFormatType, which can be any of the following: CrystalReport: the export format is Crystal Report. Excel: the export format is Excel. ExcelRecord: the export format is Excel Record. NoFormat: no export format has been specified. PortableDocFormat: the export format is PDF. I won't list them all, since you can guess what each one does; the ones above are the most important. The second parameter, the Response object, is what the exported file is written to so it can be sent to the client. Finally, the third parameter defines whether the file should be sent as an attachment or not. If we set it to True, the file is sent as an attachment, which means we no longer need the following lines: Response.Buffer = False and Response.Clear(). With that in place we can send the file directly to the client.

Now let's see how much our code has shrunk: only two lines of code are left in the button:

    'Load the report, binding it to the data source.
    LoadReport()
    reporteDoc.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, True, "NombreArchivo")

To finish, I just want to say that I hope this is useful and, of course, that it makes your life easier when working with Crystal Reports.
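    As a small variation on the routine above (a sketch; btntoexcel is a hypothetical second button), the same call can target a different format simply by changing the ExportFormatType value:

```vbnet
Protected Sub btntoexcel_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btntoexcel.Click
    'Load the report, binding it to the data source, then stream it as an Excel attachment.
    LoadReport()
    reporteDoc.ExportToHttpResponse(ExportFormatType.Excel, Response, True, "NombreArchivo")
End Sub
```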

    Read the article

  • Issue 15: Oracle PartnerNetwork Exchange @ Oracle OpenWorld

    - by rituchhibber
     ORACLE FOCUS: Oracle PartnerNetwork Exchange @ Oracle OpenWorld
     Sylvie Michou, Senior Director, Partner Marketing & Communications and Strategic Programs

If you are attending our forthcoming Oracle OpenWorld 2012 conference in San Francisco from 30 September to 4 October, you will discover a new dedicated programme of keynotes and sessions tailored especially for you, our valued partners. Oracle PartnerNetwork Exchange @ OpenWorld has been created to enhance the opportunities for you to learn from and network with Oracle executives and experts. The programme also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, colleagues and the Oracle EMEA Alliances & Channels management team.

Oracle remains fully focused on building the industry's most admired partner ecosystem, which today spans over 25,000 partners. This new OPN Exchange programme offers an exciting change of pace for partners throughout the conference. Now it will be possible to enjoy a fully-integrated, partner-dedicated session schedule throughout the week, as well as key social events such as the Sunday night Welcome Reception, networking lunches from Monday to Thursday at the Howard Street Tent, and a fantastic closing event on the last Thursday afternoon.

In addition to the regular Oracle OpenWorld conference schedule, if you have registered for the Oracle PartnerNetwork Exchange @ OpenWorld programme, you will be invited to attend a much anticipated global partner keynote presentation, plus more than 40 conference sessions aimed squarely at what's most important to you, as partners. Prominent topics for discussion will include: Oracle technologies and roadmaps and how they fit with partners' business plans; business development; regional distinctions in business practices; and much more. Each session will provide plenty of food for thought ahead of the numerous networking opportunities throughout the week, encouraging the knowledge exchange with Oracle executives, customers, prospects, and colleagues that will make this conference of even greater value for you.

At Oracle we always work closely with our partners to deliver solution offerings that improve business value, simplify the IT experience and drive innovation and efficiencies for joint customers. The most important element of our new OPN Exchange is content that helps you get more from technology investments, more from your peer-to-peer connections, and more from your interactions with customers. To this end we've created some partner-specific tools which can be used by OPN members ahead of the conference itself. Crucially, a comprehensive Content Catalog already lists and organises details of every OPN Exchange session, speaker, exhibitor, demonstration and related materials. This Content Catalog can be used by all our partners to identify interesting content that you can add to your own personalised Oracle OpenWorld Schedule Builder, allowing more effective planning and pre-enrolment for vital sessions.
There are numerous highlights that you will definitely want to include in those personal schedules. On Sunday morning, 30 September we will start the week with partner dedicated OPN Exchange sessions, following our Global Partner Keynote at 13:00 with Judson Althoff, SVP, Worldwide Alliances & Channels and Embedded Sales and senior executives, giving insight into Oracle's partner vision, strategy, and resources—all designed to help build and strengthen market opportunities for you. This will be followed by a number of OPN Exchange general sessions, the Oracle OpenWorld Opening Keynote with Larry Ellison, CEO, Oracle and concluded with the OPN Exchange AfterDark Welcome Reception, starting at 19:30 at the Metreon. From Monday 1 to Thursday 4 October, you can attend the OPN Exchange sessions that are most relevant to your business today and over the coming year. Oracle's top product and sales leaders will be on hand to discuss Oracle's strategic direction in 40+ targeted and in-depth sessions focussing on critical success factors to develop your business. Oracle's dedication to innovation, specialization, enablement and engineering provides Oracle partners with a huge opportunity to create new services and solutions, differentiate themselves and deliver extreme value to joint customers across the globe. Oracle will even be helping over 1000 partners to earn OPN Specialization certification during the Oracle OpenWorld OPN Exchange Test Fest, which will be providing all the study materials and exams required to drive Specialization for free at the conference. You simply need to check the list of current certification tracks available, and make sure you pre-register to reserve a seat in one of the ten sessions being offered free to OPN Exchange registered attendees. And finally, let's not forget those all-important networking opportunities, which can so often provide partners with valuable long-term alliances as well as exciting new business leads. The Oracle PartnerNetwork Lounge, located at Moscone South, exhibition hall, room 100 is the place where partners can meet formally or informally with colleagues, customers, prospects, and other industry professionals. OPN Specialized partners with OPN Exchange passes can also visit the OPN Video Blogging room to record and share ideas, and at the OPN Information Station you will find consultants available to answer your questions. "For the first time ever we will have a full partner conference within OpenWorld. OPN Exchange @ OpenWorld will kick-off on the first Sunday and run the entire week. We'll have over 40 sessions throughout that time and partners will hear from our top development executives, with special sessions dedicated to partnering throughout. It's going to be a phenomenal event, and we look forward to seeing our partners there." Judson Althoff, SVP, Oracle Worldwide Alliances & Channels and Embedded Sales So if you haven't done so already, please register for Oracle PartnerNetwork Exchange @ OpenWorld today or add OPN Exchange to your existing registration for just $100 through My Account. 
And if you have any further questions regarding partner activities at Oracle OpenWorld, please don't hesitate to contact the Oracle PartnerNetwork team at [email protected]. -- At the Extreme Performance Tour, experts will be on hand to share the very latest information about: Oracle's SPARC Superclusters: the latest Engineered Systems from Oracle, delivering radically improved performance, faster deployment and greatly reduced operational costs for mixed database and enterprise application consolidation; Oracle's SPARC T4 servers: with the newly developed T4 processor and Oracle Solaris providing up to five times the single threaded performance and better overall system throughput for expanded application versatility; Oracle Database Appliance: a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy and manage system. It's a complete package engineered to deliver simple, reliable and affordable database services to small and medium size businesses and departmental systems. All hardware and software components are supported together and offer customers unique pay-as-you-grow software licensing to quickly scale from two to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades; Oracle Exalogic: the world's only integrated cloud machine, featuring server hardware and middleware software engineered together for maximum performance with minimum set-up and operational cost; Oracle Exadata Database Machine: the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. It is a complete package of servers, storage, networking and software that is massively scalable, secure and redundant; Oracle Sun ZFS Storage Appliances: providing enterprise-class NAS performance, price-performance, manageability and TCO by combining third-generation software with high-performance controllers, flash-based caches and disks; Oracle Pillar Axiom Quality-of-Service: confidently consolidate storage for multiple applications into a single datacentre storage solution; Oracle Solaris 11: delivering secure enterprise cloud deployments with the ability to run hundreds of virtual applications with no overhead and co-engineered with other Oracle software products to provide the highest levels of security, manageability and performance; Oracle Enterprise Manager 12c: Oracle's integrated enterprise IT management product, providing the industry's only complete, integrated and business-driven enterprise cloud management solution; Oracle VM 3.0: the latest release of Oracle's server virtualisation and management solution, helping to move datacentres beyond server consolidation to improve application deployment and management. Register today and ensure your place at the Extreme Performance Tour! Extreme Performance Tour events are free to attend, but places are limited. To make sure that you don't miss out, please visit Oracle's Extreme Performance Tour website, select the city that you'd be interested in attending an event in, and then click on the 'Register Now' button for that city to secure your interest. Each individual city page also contains more in-depth information about your local event, including logistics, agenda and maybe even a preview of VIP guest speakers. -- Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. 
The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010; Sponsorship Opportunities at Oracle OpenWorld 2010; Advertising Opportunities at Oracle OpenWorld 2010.

    Read the article

  • Pluralsight Meet the Author Podcast on Structuring JavaScript Code

    - by dwahlin
    I had the opportunity to talk with Fritz Onion from Pluralsight about one of my recent courses titled Structuring JavaScript Code for one of their Meet the Author podcasts. We talked about why JavaScript patterns are important for building more re-useable and maintainable apps, pros and cons of different patterns, and how to go about picking a pattern as a project is started. The course provides a solid walk-through of converting what I call “Function Spaghetti Code” into more modular code that’s easier to maintain, more re-useable, and less susceptible to naming conflicts. Patterns covered in the course include the Prototype Pattern, Revealing Module Pattern, and Revealing Prototype Pattern along with several other tips and techniques that can be used. Meet the Author: Dan Wahlin on Structuring JavaScript Code. The transcript from the podcast is shown below: [Fritz]  Hello, this is Fritz Onion with another Pluralsight author interview. Today we’re talking with Dan Wahlin about his new course, Structuring JavaScript Code. Hi, Dan, it’s good to have you with us today. [Dan]  Thanks for having me, Fritz. [Fritz]  So, Dan, your new course, which came out in December of 2011 called Structuring JavaScript Code, goes into several patterns of usage in JavaScript as well as ways of organizing your code and what struck me about it was all the different techniques you described for encapsulating your code. I was wondering if you could give us just a little insight into what your motivation was for creating this course and sort of why you decided to write it and record it. [Dan]  Sure. So, I got started with JavaScript back in the mid 90s. In fact, back in the days when browsers that most people haven’t heard of were out and we had JavaScript but it wasn’t great. I was on a project in the late 90s that was heavy, heavy JavaScript and we pretty much did what I call in the course function spaghetti code where you just have function after function, there’s no rhyme or reason to how those functions are structured, they just kind of flow and it’s a little bit hard to do maintenance on it, you really don’t get a lot of reuse as far as from an object perspective. And so coming from an object-oriented background in Java and C#, I wanted to put something together that highlighted kind of the new way if you will of writing JavaScript because most people start out just writing functions and there’s nothing wrong with that, it works, but it’s definitely not a real reusable solution. So the course is really all about how to move from just kind of function after function after function to the world of more encapsulated code and more reusable and hopefully better maintenance in the process. [Fritz]  So I am sure a lot of people have had similar experiences with their JavaScript code and will be looking forward to seeing what types of patterns you’ve put forth. Now, a couple I noticed in your course: one is you start off with the prototype pattern. Do you want to describe sort of what problem that solves and how you go about using it within JavaScript? [Dan]  Sure. So, the patterns that are covered such as the prototype pattern and the revealing module pattern, just as two examples, you know, show these kind of three things that I harp on throughout the course: encapsulation, better maintenance, reuse, those types of things. 
The prototype pattern specifically though has a couple kind of pros over some of the other patterns and that is the ability to extend your code without touching source code and what I mean by that is let’s say you’re writing a library that you know either other teammates or other people just out there on the Internet in general are going to be using. With the prototype pattern, you can actually write your code in such a way that we’re leveraging the JavaScript prototype property and by doing that now you can extend my code that I wrote without touching my source code, or you can even override my code and perform some new functionality. Again, without touching my code. And so you get kind of the benefit of the almost like inheritance or overriding in object oriented languages with this prototype pattern and it makes it kind of attractive that way definitely from a maintenance standpoint because, you know, you don’t want to modify a script I wrote because I might roll out version 2 and now you’d have to track where you change things and it gets a little tricky. So with this you just override those pieces or extend them and get that functionality and that’s kind of some of the benefits that that pattern offers out of the box. [Fritz]  And then the revealing module pattern, how does that differ from the prototype pattern and what problem does that solve differently? [Dan]  Yeah, so the prototype pattern and there’s another one that’s kind of really closely aligned with the revealing module pattern called the revealing prototype pattern and it also uses the prototype keyword but it’s very similar to the one you just asked about, the revealing module pattern. [Fritz]  Okay. [Dan]  This is a really popular one out there. In fact, we did a project for Microsoft that was very, very heavy JavaScript. It was an HTML5 jQuery type app and we use this pattern for most of the structure if you will for the JavaScript code and what it does in a nutshell is allows you to get that encapsulation so you have really a single function wrapper that wraps all your other child functions but it gives you the ability to do public versus private members and this is kind of a sort of debate out there on the web. Some people feel that all JavaScript code should just be directly accessible and others kind of like to be able to hide their, truly their private stuff and a lot of people do that. You just put an underscore in front of your field or your variable name or your function name and that kind of is the de facto way to say hey, this is private. With the revealing module pattern you can do the equivalent of what object oriented languages do and actually have private members that you literally can’t get to as an external consumer of the JavaScript code and then you can expose only those members that you want to be public. Now, you don’t get the benefit though of the prototype feature, which is: I can’t easily extend the revealing module pattern type code. If you don’t like something I’m doing, chances are you’re probably going to have to tweak my code to fix that because we’re not leveraging prototyping, but in situations where you’re writing apps that are very specific to a given target app, you know, it’s not a library, it’s not going to be used in other apps all over the place, it’s a pattern I actually like a lot, it’s very simple to get going and then if you do like that public/private feature, it’s available to you. [Fritz]  Yeah, that’s interesting. 
So it’s almost, you can either go private by convention just by using a standard naming convention or you can actually enforce it by using the revealing module pattern. [Dan]  Yeah, that’s exactly right. [Fritz]  So one of the things that I know I run across in JavaScript and I’m curious to get your take on is we do have all these different techniques of encapsulation and each one is really quite different when you’re using closures versus simply, you know, referencing member variables and adding them to your objects; the syntax changes with each pattern and the usage changes. So what would you recommend for people starting out in a brand new JavaScript project? Should they all sort of decide beforehand on what patterns they’re going to stick to or do you change it based on what part of the library you’re working on? I know that’s one of the points of confusion in this space. [Dan]  Yeah, it’s a great question. In fact, I just had a company ask me about that. So which one do I pick and, of course, there’s not one answer that fits all. [Fritz]  Right. [Dan]  So it really depends. What you just said is absolutely, in my opinion, correct, which is I think as a, especially if you’re on a team or even if you’re just an individual, a team of one, you should go through and pick out which pattern for this particular project you think is best. Now if it were me, here’s kind of the way I think of it. If I were writing a, let’s say, base library that several web apps are going to use or even one, but I know that there’s going to be some pieces that I’m not really sure on right now as I’m writing it, and I know people might want to hook into that and have some better extension points, then I would look at either the prototype pattern or the revealing prototype. Now, really just a real quick summation between the two: the revealing prototype also gives you that public/private stuff like the revealing module pattern does whereas the prototype pattern does not, but both of the prototype patterns do give you the benefit of that extension or that hook capability. So, if I were writing a library that I need people to override things or I’m not even sure what I need them to override, I want them to have that option, I’d probably pick a prototype, one of the prototype patterns. If I’m writing some code that is very unique to the app and it’s kind of a one off for this app, which is what I think a lot of people are kind of in that mode as writing custom apps for customers, then my personal preference is the revealing module pattern. You could always go with the module pattern as well, which is very close, but I think the revealing module pattern’s a little bit cleaner and we go through that in the course and explain kind of the syntax there and the differences. [Fritz]  Great, that makes a lot of sense. [Fritz]  I appreciate you taking the time, Dan, and I hope everyone takes a chance to look at your course and sort of make these decisions for themselves in their next JavaScript project. Dan’s course is Structuring JavaScript Code and it’s available now in the Pluralsight Library. So, thank you very much, Dan. [Dan]  Thanks for having me again.
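To make the two patterns from the conversation concrete, here is a minimal JavaScript sketch; it is not taken from the course itself, and the calculator example and its names are invented purely for illustration. The first object uses the revealing module pattern, so its history array is genuinely private; the second uses the prototype pattern, so consumers can extend or override behaviour without editing the original source file.

// Revealing module pattern: one wrapper function, private members stay hidden.
var calculator = (function () {
    var history = [];                      // private - not reachable from outside

    function log(entry) {                  // private helper
        history.push(entry);
    }

    function add(x, y) {                   // public
        var result = x + y;
        log(x + ' + ' + y + ' = ' + result);
        return result;
    }

    function getHistory() {                // public, returns a copy only
        return history.slice();
    }

    // Only what is returned here is visible to callers.
    return { add: add, getHistory: getHistory };
}());

// Prototype pattern: members live on the prototype, so other code can
// extend or override them without touching this file.
function Calculator() {
    this.history = [];
}

Calculator.prototype.add = function (x, y) {
    var result = x + y;
    this.history.push(x + ' + ' + y + ' = ' + result);
    return result;
};

// Someone else's script can later add or override behaviour:
Calculator.prototype.subtract = function (x, y) {
    return x - y;
};

var calc = new Calculator();
calc.add(2, 3);        // 5
calc.subtract(7, 4);   // 3

The module version enforces privacy (history can only be read through getHistory), while the prototype version trades that enforcement for the extension and override points discussed in the interview.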

    Read the article

  • CodePlex Daily Summary for Saturday, February 20, 2010

    CodePlex Daily Summary for Saturday, February 20, 2010New ProjectsBerkeliumDotNet: BerkeliumDotNet is a .NET wrapper for the Berkelium library written in C++/CLI. Berkelium provides off-screen browser rendering via Google's Chromi...BoxBinary Descriptive WebCacheManager Framework: Allows you to take a simple, different, and more effective method of caching items in ASP.NET. The developer defines descriptive categories to deco...CHS Extranet: CHS Extranet Project, working with the RM Community to build the new RM Easylink type applicationDbModeller: Generates one class having properties with the basic C# types aimed to serve as a business object by using reflection from the passed objects insta...Dice.NET: Dice.NET is a simple dice roller for those cases where you just kinda forgot your real dices. It's very simple in setup/use.EBI App with the SQL Server CE and websync: EBI App with the SQL Server CE and SQL Server DEV with Merge Replication(Web Synchronization) ere we are trying to develop an application which yo...Family Tree Analyzer: This project with be a c# project which aims to allow users to import their GEDCOM files and produce various data analysis reports such as a list o...Go! Embedded Device Builder: Go! is a complete software engineering environment for the creation of embedded Linux devices. It enables you to engineer, design, develop, build, ...HiddenWordsReadingPlan: HiddenWordsReadingPlanHtml to OpenXml: A library to convert simple or advanced html to plain OpenXml document.Jeffrey Palermo's shared source: This project contains multiple samples with various snippets and projects from blog posts, user group talks, and conference sessions.Krypton Palette Selectors: A small C# control library that allows for simplified palette selection and management. It makes use of and relies on Component Factory's excellen...OCInject: A DI container on a diet. This is a basic DI container that lives in your project not an external assembly with support for auto generated delegat...Photo Organiser: A small utility to sort photos into a new file structure based on date held in their XMP or EXIF tags (YYYY/MM/DD/hhmmss.jpg). Developed in C# / WPF.QPAPrintLib: Print every document by its recommended programmReusable Library: A collection of reusable abstractions for enterprise application developer.Runtime Intelligence Data Visualizer: WPF application used to visualize Runtime Intelligence data using the Data Query Service from the Runtime Intelligence Endpoint Starter Kit.ScreenRec: ScreenRec is program for record your desktop and save to images or save one picture.Silverlight Internet Desktop Application Guidance: SLIDA (Silverlight Internet Desktop Applications) provides process guidance for developers creating Silverlight applications that run exclusively o...WSUS Web Administration: Web Application to remotely manage WSUSNew Releases7zbackup - PowerShell Script to Backup Files with 7zip: 7zBackup v. 1.7.0 Stable: Bug Solved : Test-Path-Writable fails on root of system drive on Windows 7. 
Therefore the function now accepts an optional parameter to specify if ...aqq: sec 1.02: Projeto SEC - Sistema economico Comercial - em Visual FoxPro 9.0 OpenSource ou seja gratis e com fontes, licença GNU/GPL para maiores informações e...ASP.NET MVC Attribute Based Route Mapper: Attribute Based Routes v0.2: ASP.NET MVC Attribute Based Route MapperBoxBinary Descriptive WebCacheManager Framework: Initial release: Initial assembly release for anyone wanting the files referenced in my talk at Umbraco's 5th Birthday London meetup 16/Feb/2010 The code is fairly...Build Version Increment Add-In Visual Studio: Build Version Increment v2.2 Beta: 2.2.10050.1548Added support for custom increment schemes via pluginsBuild Version Increment Add-In Visual Studio: BuildVersionIncrement v2.1: 2.1.10050.1458Fix: Localization issues Feature: Unmanaged C support Feature: Multi-Monitor support Feature: Global/Default settings Fix: De...CHS Extranet: Beta 2.3: Beta 2.3 Release Change Log: Fixed the update my details not updating the department/form Tried to fix the issue when the ampersand (&) is in t...Cover Creator: CoverCreator 1.2.2: Resolved BUG: If there is only one CD entry in freedb.org application do nothing. Instalation instructions Just unzip CoverCreator and run CoverCr...Employee Scheduler: Employee Scheduler 2.3: Extract the files to a directory and run Lab Hours.exe. Add an employee. Double click an employee to modify their times. Please contact me through ...EnOceanNet: EnOceanNet v1.11: Recompiled for .NET Framework 4 RCFree Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.3 beta 4 Released: Hi, This release contains fix for the following bugs: * DataBinding was not working as expected with RIA services. * DataSeries visual wa...Html to OpenXml: HtmlToOpenXml 0.1 Beta: This is a beta version for testing purpose.Jeffrey Palermo's shared source: Web Forms front controller: This code goes along with my blog post about adding code that executes before your web form http://jeffreypalermo.com/blog/add-post-backs-to-mvc-nd...Krypton Palette Selectors: Initial Release: The initial release. Contains only the KryptonPaletteDropButton control.LaunchMeNot: LaunchMeNot 1.10: Lots has been added in this release. Feel free, however, to suggest features you'd like on the forums. Changes in LaunchMeNot 1.10 (19th February ...Magellan: Magellan 1.1.36820.4796 Stable: This is a stable release. It contains a fix for a bug whereby the content of zones in a shared layout couldn't use element bindings (due to name sc...Magellan: Magellan 1.1.36883.4800 Stable: This release includes a couple of major changes: A new Forms object model. See this page for details. Magellan objects are now part of the defau...MAISGestão: LayerDiagram: LayerDiagramMatrix3DEx: Matrix3DEx 1.0.2.0: Fixes the SwapHandedness method. This release includes factory methods for all common transformation matrices like rotation, scaling, translation, ...MDownloader: MDownloader-0.15.1.55880: Fixed bugs.NewLineReplacer: QPANewLineReplacer 1.1: Replace letter fast and easy in great textfilesOAuthLib: OAuthLib (1.5.0.1): Difference between 1.5.0.0 is just version number.OCInject: First Release: First ReleasePhoto Organiser: Installer alpha release: First release - contains known bugs, but works for the most part.Pinger: Pinger-1.0.0.0 Source: The Latest and First Source CodePinger: Pinger-1.0.0.2 Binary: Hi, This version can! work on Older versions of windows than 7 but i haven't test it! 
tnxPinger: Pinger-1.0.0.2 Source: Hi, It's the raw source!Reusable Library: v1.0: A collection of reusable abstractions for enterprise application developer.ScreenRec: Version 1: One version of this programSense/Net Enterprise Portal & ECMS: SenseNet 6.0 Beta 5: Sense/Net 6.0 Beta 5 Release Notes We are proud to finally present you Sense/Net 6.0 Beta 5, a major feature overhaul over beta43, and hopefully th...Silverlight Internet Desktop Application Guidance: v1: Project templates for Silverlight IDA and Silverlight Navigation IDA.SLAM! SharePoint List Association Manager: SLAM v1.3: The SharePoint List Association Manager is a platform for managing lists in SharePoint in a relational manner. The SLAM Hierarchy Extension works ...SQL Server PowerShell Extensions: 2.0.2 Production: Release 2.0.1 re-implements SQLPSX as PowersShell version 2.0 modules. SQLPSX consists of 8 modules with 133 advanced functions, 2 cmdlets and 7 sc...StoryQ: StoryQ 2.0.2 Library and Converter UI: Fixes: 16086 This release includes the following files: StoryQ.dll - the actual StoryQ library for you to reference StoryQ.xml - the xmldoc for ...Text to HTML: 0.4.0 beta: Cambios de la versión:Inclusión de los idiomas castellano, inglés y francés. Adición de una ventana de configuración. Carga dinámica de variabl...thor: Version 1.1: What's New in Version 1.1Specify whether or not to display the subject of appointments on a calendar Specify whether or not to use a booking agen...TweeVo: Tweet What Your TiVo Is Recording: TweeVo v1.0: TweeVo v1.0VCC: Latest build, v2.1.30219.0: Automatic drop of latest buildVFPnfe: Arquivo xml gerado manualmente: Segue um aquivo que gera o xml para NF-e de forma manual, estou trabalhando na versão 1.1 deste projeto, aguarde, enquanto isso baixe outro projeto...Windows Double Explorer: WDE v0.3.7: -optimization -locked tabs can be reset to locked directory (single & multi) -folder drag drop to tabcontrol creates new tab -splash screen -direcl...WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.5: Visual Studio 2008 and RC 2010 are now supported. Different profiles can now be used to compile the shader with. ChangesVisual Studio RC 2010 is ...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Image Resizer Powertoy Clone for WindowsASP.NETDotNetNuke® Community EditionMicrosoft SQL Server Community & SamplesMost Active ProjectsDinnerNow.netRawrSharpyBlogEngine.NETSharePoint ContribjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices – Enterprise LibraryPHPExcelFluent Ribbon Control Suite

    Read the article

  • Oracle Fusion Applications: Changing the Game

    - by kellsey.ruppel(at)oracle.com
    Originally posted in the Oracle Profit Magazine, November 2010 Edition. When the order processing system red-flags a customer's credit status, the IT department doesn't get the customer's call. When a supplier misses a delivery date for a key automotive assembly, it's not the CIO who has to answer for the error. Knowledge workers (known in IT circles as "users") are on the front lines when an exception occurs in an established business process. They're also the ones who study sales trends to decide when to open a new store in an up-and-coming neighborhood, which products are most profitable, how employee skill sets are evolving, and which suppliers are most efficient. In short, knowledge workers are masters of business as unusual. Traditional enterprise resource planning (ERP) systems and other familiar enterprise applications excel at automating, managing, and executing standard business processes. These programs shine when everything goes as planned. Life gets even trickier when a traditional application needs to be extended with a new service or an extra step is added to a business process when new products are brought to market, divisions are merged, or companies are acquired. Monolithic applications often need the IT department to step in and make the necessary adjustments--incurring additional costs and delays. Until now. When Oracle unveiled the much-anticipated family of Oracle Fusion Applications at Oracle OpenWorld in September 2010, knowledge workers in particular had a lot to cheer about. Business users will soon have ready access to analytical information and collaboration tools in the context of what they are working on, so they can make better decisions when problems or opportunities arise. Additionally, the Oracle Fusion Applications platform will make it easy for business users to tweak processes, create new capabilities, and find information, often without the need for IT department assistance and while still following company guidelines. And IT leaders will be happy to hear about new deployment options, guided implementation and setup tools, and cost-saving management capabilities. Just as important, the underlying technologies in Oracle Fusion Applications will allow organizations to choose among their existing investments and next-generation enterprise applications so they can introduce innovations at a pace that makes the most business and financial sense. "Oracle Fusion Applications are architected so you don't have to do rip and replace," says Jim Hayes, managing director of the consulting firm Accenture. "That's very important for creating a business case that will get through the steering committee and be approved by the board. It shows you can drive value and make a difference in the near term." For these and other reasons, analysts and early adopters are calling Oracle Fusion Applications a game changer for enterprise customers. The differences become apparent in three key areas: the way we innovate, work, and adopt technology.
Game Changer #1: New Standard for Innovation
Change is a constant challenge for most businesses, whether the catalysts are market dynamics, new competition, or the ever-expanding regulatory environment. And, in an ongoing effort to differentiate, business leaders are constantly looking for new ways to do business, serve constituents, and bring new products and services to market. In addition, companies face significant costs to keep their applications up-to-date. 
For example, when a company adds new suppliers to a procurement system, the IT shop typically has to invest time, effort, and even consulting fees for custom integrations that allow various ERP systems to communicate with each other. Oracle Fusion Applications were built on Web services and a modular SOA foundation to ease customizations and integration activities among all applications--whether from Oracle or another vendor. Interfaces and updates written in ubiquitous Java, rather than a proprietary coding language, allow organizations to tap into existing in-house technical skills rather than seek expensive outside specialists. And with SOA, organizations can extend a feature set or integrate with other SOA environments by combining Web services such as "look up customer" into a new business process managed by the BPEL orchestration engine. Flexibility like this has long-term implications. "Because users capture these changes at a higher metadata layer, not in the application's code, changes and additions are protected even as new versions of Oracle Fusion Applications are released," says Steve Miranda, senior vice president of applications development at Oracle. "This is a much more sustainable approach because you don't incur costly customizations that prevent upgrades and other innovations." And changes are easier to make: if one change is made in the metadata, that change is automatically reflected throughout the application interface, business intelligence, business process, and business logic.
Game Changer #2: New Standard for Work
Boosting productivity comes down to doing the basics right: running business processes more efficiently and managing exceptions more effectively, so users can accomplish more in the course of a day or spend more quality time with the most profitable customers. The fastest way to improve process efficiency is to reduce the number of steps it takes to execute common tasks, such as ordering office equipment from an internal procurement system. Oracle Fusion Applications will deliver a complete role-based user experience with business intelligence and collaboration capabilities provided in the context of the work at hand. "We created every Oracle Fusion Applications screen by asking 'What does the user need to know?' 'What does he or she need to do?' and 'Who do they need to work with to get the job done?'" Miranda explains. So when the sales department heads need new laptops, the self-service procurement screen will not only display a list of approved vendors and configurations, but also a running list of reviews by coworkers who recently purchased the various models. Embedded intelligence may also display prevailing delivery lead times based on actual order histories, not the generic shipping dates vendors may quote. The pervasive business intelligence serves many other business activities across all areas of the enterprise. For example, a manager considering whether to promote a direct report can see the person's employee profile, with a salary history, appraisal summaries, and a rundown of skills and training. This approach to business intelligence also has implications for supply chain management. "One of the challenges at Ingersoll Rand is lack of visibility in our supply chain," says Mike Macrie, global director of enterprise applications for global industrial firm Ingersoll Rand. 
"Oracle Fusion Applications are going to provide the embedded intelligence to give us that visibility and give us the ability to analyze those orders at any point in our supply chain." Oracle Fusion Applications will also create a "role-based user experience" that displays a work list of events that need attention, based on user job function. Role awareness guides users with daily lists of action items and exceptions. So a credit manager may see seven invoices with discounts that are about to expire or 12 suppliers that have been put on hold because credit memos are awaiting approval. Individualization extends to the search capabilities of Oracle Fusion Applications. The platform uses Web-style search screens powered by an Oracle enterprise search engine, with a security framework that filters search results so individuals will only see the internal information they're authorized to access. A further aid to productivity is Oracle Fusion Applications' integration with Web 2.0 collaboration and social networking resources for business environments. Hover-over text will reveal relevant contact information whenever the name of a person appears in an Oracle Fusion Application. Users can connect via an online chat, phone call, or instant message without leaving the main application, reducing the time required for an accounts payable staffer to resolve a mismatch between an invoiced charge and the service record, for example. Addresses of suppliers, customers, or partners will also initiate hover-over text to show contact details and Web-based maps. Finally, Oracle Fusion Applications will promote a new way of working with purpose-driven communities that can bring new efficiencies to everything from cultivating sales leads to managing new projects. As soon as a lead or project materializes, the applications will automatically gather relevant participants into an online community that shares member contact information, schedules, discussion forums, and Wiki pages. "Oracle Fusion Applications will allow us to take it to the next level with embedded Web 2.0 tools and the embedded analytics," says Steve Printz, CIO and vice president, supply chain management, at window-and-door manufacturer Pella. "[This] allows those employees today who are processing transactions to really contribute to the success of the company and become decision-makers." Game Changer #3: New Standard for Technology AdoptionAs IT becomes a dominant component of how businesses run and compete, organizations need to lower the cost of implementing applications and introducing new application features. In the past, rolling out new code often required creating a test bed system, moving beta code to a separate system for user feedback, and--once all the revisions were made--moving version one of the software onto production systems, where business users could finally get the needed new features. Oracle Fusion Applications will use a dedicated setup manager application to streamline this process. First, the setup manager will help scope out the project, querying users about their requirements. "From those questions and answers we determine the steps and the order of those steps that will enable that task," Miranda says. Next, system utilities will assign tasks to owners, track completion status, and monitor the overall status of a programming effort. Oracle Fusion Applications can then recommend Web services that allow users to migrate setup choices and steps across all the various deployments of the application. 
Those setup capabilities automate the migration from test systems to production systems, as well as between different business units that may be using the same application. "The self-service ability of the setup manager helps business users change setups with very little intervention from the IT team," says Ravi Kumar, vice president at IT services company Infosys. "That to me is a big difference from how we've viewed enterprise applications before." For additional flexibility, organizations will be able to adopt Oracle Fusion Applications modules in either of two modes: a single-instance alternative uses one database for all Oracle Fusion Applications, while a "pillar mode" creates separate databases to underpin each application. This means IT departments running any one of Oracle's applications or even third-party applications can plug Oracle Fusion Applications modules into their environment and see additional business value created on top of their existing systems. And Oracle Fusion Applications offer a hybrid approach to deployment. The applications are all software-as-a-service-ready, so customers can choose on-premises, public or private cloud, or a combination of these to suit their business needs. It's that combination of flexibility and a roadmap for the future that may be the biggest game changer of all. "The Oracle Fusion Applications architecture allows us to migrate our company at a pace that's consistent with our business strategy, whereas before we might have had to do it with a massive upgrade," says Macrie of Ingersoll Rand. "We're looking forward to that architecture to really give us more flexibility in how we migrate over time." For More Information: User Input Key to the Success of Oracle Fusion Applications; Transforming Coexistence into Strategic Value; Under the Hood; Oracle Fusion Applications; Oracle Service-Oriented Architecture

    Read the article

  • Entity Framework version 1- Brief Synopsis and Tips &ndash; Part 1

    - by Rohit Gupta
    To do eager loading, use projections (e.g. from c in context.Contacts select c, c.Addresses) or use the Include query builder method (Include("Addresses")).
If there is multi-level hierarchical data, then to eager load all the relationships use Include query builder methods like customers.Include("Order.OrderDetail") to include the Order and OrderDetail collections, or customers.Include("Order.OrderDetail.Location") to include all Order, OrderDetail and Location collections with a single Include statement.
===========================================================================
If the query uses joins then the Include() query builder method will be ignored; use nested queries instead.
If the query does projections then the Include() query builder method will be ignored.
Use Address.ContactReference.Load() or Contact.Addresses.Load() if you need to deferred-load a specific entity – this will result in extra round trips to the database.
ObjectQuery<> cannot return anonymous types; it will return an ObjectQuery<DBDataRecord>.
Only the Include method can be added to LINQ query methods. Any LINQ query method can be added to query builder methods. If you need to append a query builder method (other than Include) after a LINQ method, then cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the query builder method to it.
===========================================================================
Query builder methods are Select, Where and Include – methods which use Entity SQL as parameters, e.g. "it.StartDate, it.EndDate".
When query builder methods do projection they return ObjectQuery<DBDataRecord>, thus to iterate over this collection use contact.Item["Name"].ToString(). When LINQ to Entities methods do projection, they return a collection of anonymous types – thus the collection is strongly typed and supports IntelliSense.
The EF object context can track changes only on entities, not on anonymous types.
If you use a Defining Query for an EntitySet then the EntitySet becomes read-only, since a Defining Query is the same as a view (which is treated as read-only by default). However, if you want to use this EntitySet for inserts/updates/deletes then you need to map stored procs (as created in the DB) to the insert/update/delete functions of the entity in the designer.
You can use either the Execute method or the ToList() method to bind data to data sources/binding sources. If you use the Execute method then remember that you can traverse the ObjectResult<> collection (returned by Execute) only ONCE.
In WPF use ObservableCollection to bind to data sources, for keeping track of changes and letting EF send updates to the DB automatically.
Use extension methods to add logic to entities, e.g. create extension methods for the EntityObject class. Create a method in the ObjectContext partial class and pass the entity as a parameter, then call this method as desired from within each entity.
================================================================
DefiningQueries and Stored Procedures: for custom entities, one can use a DefiningQuery or stored procedures. Thus the custom entity collection will be populated using the DefiningQuery (of the EntitySet) or the sproc. If you use a sproc to populate the EntityCollection then the query execution is immediate and this execution happens on the server side, and any filters applied will be applied in the client app. 
If we use a DefiningQuery then these queries are composable, meaning the filters (if applied to the EntitySet) will all be sent together as a single query to the DB, returning only filtered results.
If the sproc returns results that cannot be mapped to an existing entity, then we first create the Entity/EntitySet in the CSDL using the designer, then create a dummy Entity/EntitySet using XML in the SSDL. When creating an EntitySet in the SSDL for this dummy entity, use TSQL that does not return any results, but does return the relevant columns, e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2. Also ensure that the entity created in the SSDL uses the SQL data types and not .NET data types. If you are unable to open the EDMX file in the designer then note the errors; they will give precise info on what is wrong.
The third option is to simply create a native query in the SSDL using <Function Name="PaymentsforContact" IsComposable="false">   <CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities </CommandText></Function> and then map this Function to an existing entity. This is a quick way to get a custom entity which is a regular entity with renamed columns or additional columns (which are computed columns). The disadvantage to using this is that it will return all the rows from the defining query and any filter (if defined) will be applied only at the client side (after getting all the rows from the DB). If you use a DefiningQuery instead then we can use that as a composable query.
The fourth option (for mapping the results of a READ stored proc to a non-existent entity) is to create a view in the database which returns all the fields that the sproc also returns, then update the model so that the model contains this view as an entity, and then map the read sproc to this view entity. The other option would be to simply create the view and remove the sproc altogether.
================================================================
To execute a sproc that does not return an entity, use an EntityCommand to execute that proc. You cannot call a sproc FunctionImport that does not return entities from code; the only way is to use SSDL function calls using EntityCommand. This changes with Entity Framework version 4, where you can return scalar types, complex types, entities or NonQuery.
================================================================
For a UDF created as a Function in the SSDL, we need to set the Name & IsComposable properties for the Function element. IsComposable is always false for sprocs; for UDFs set this to true. You cannot call a UDF "Function" from within code since you cannot import a UDF Function into the CSDL model (with version 1 of EF); only stored procedures can be imported and then mapped to an entity.
================================================================
Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (Insert, Update and Delete). Because Payment has an association to Reservation, we need to pass both the paymentId and reservationId to the Delete sproc even though just the paymentId is the PK on the Payment table.
================================================================
When mapping insert, update and delete procs to an entity, ensure that all three or none are mapped. 
Further, if you have a base class and derived class in the CSDL, then you must map (insert, update, delete) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation (that base and derived entity methods must all be mapped) does not apply when you are mapping read stored procedures.
================================================================
You can write stored procedure SQL directly into the SSDL by creating a Function element in the SSDL; once created, you can map this Function to a CSDL entity directly in the designer during Function Import.
================================================================
You can do entity splitting such that one entity maps to multiple tables in the DB. For example, the Customer entity currently derives from the Contact entity; in addition it also references the ContactPersonalInfo entity. One can copy all properties from the ContactPersonalInfo entity into the Customer entity and then delete the ContactPersonalInfo entity, and finally one needs to map the copied properties to the ContactPersonalInfo table in Table Mapping (by adding another table (ContactPersonalInfo) to the Table Mapping); this is called entity splitting. Thus now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables even though you have a single entity called Customer in the CSDL.
===================================================================
There is Table per Type inheritance, where one EDM entity can derive from another EDM entity and absorb the inherited entity's properties. For example, in the Break Away Geek Adventures EDM, the Customer entity derives (inherits) from the Contact entity and absorbs all the properties of the Contact entity. Thus when you create a Customer entity in code and then call context.SaveChanges, the object context will first create the TSQL to insert into the Contact table followed by TSQL to insert into the Customer table.
===================================================================
Then there is Table per Hierarchy inheritance, where different types are created based on a condition (similar to applying a condition to filter an entity to contain filtered records); the difference being that the filter condition populates a new entity type (derived from the base entity). In the BreakAway sample the example is the Lodging entity, which is an abstract entity, and then the Resort and NonResort entities which derive from the Lodging entity; records are filtered based on the value of the Resort Boolean field.
===================================================================
Then there is Table per Concrete Type hierarchy, where we create a concrete entity for each table in the database. In the BreakAway sample there is an entity for the Reservation table and another entity for the OldReservation table even though both tables contain the same number of fields. 
The OldReservation entity can then inherit from the Reservation entity, and you can configure the OldReservation entity to remove all scalar properties from the entity (since it inherits the properties from Reservation and filters based on the ReservationDate field).
===================================================================
Complex Types (Complex Properties): Entities in EF can also contain complex properties (in addition to scalar properties), and these complex properties reference a ComplexType (not an EntityType). DropDownList, ListBox, RadioButtonList, CheckBoxList and BulletedList are examples of list server controls (not data bound controls). These controls cannot use complex properties during databinding; they need scalar properties. So if an entity contains complex properties and you need to bind those to list server controls, then use projections to return scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the object context and hence cannot persist changes to the projected collections bound to controls). ObjectDataSource and EntityDataSource do account for complex properties, and one can bind entities with complex properties to data source controls and they will be tracked for changes, with no additional plumbing needed to persist changes to these collections bound to controls. So data bound controls like GridView and FormView need to use EntityDataSource or ObjectDataSource as a data source for entities that contain complex properties, so that changes to the data source done using the GridView can be persisted to the DB (enabling the controls for updates); if you cannot use the EntityDataSource you need to flatten the ComplexType properties using projections. With EF version 4, ComplexTypes are supported by the designer and you can add/remove/compose complex types directly using the designer.
===================================================================
Conditional mapping is like Table per Hierarchy inheritance, where entities inherit from a base class and then use conditions to populate the EntitySet (called conditional mapping). Conditional mapping has limitations since you can only use =, IS NULL and IS NOT NULL conditions to do conditional mapping. If you need more operators for filtering/mapping conditionally, then use a QueryView (or possibly a Defining Query) to create a read-only entity. QueryViews are read-only by default; the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext, however the ObjectContext cannot create insert/update/delete TSQL statements for these entities when SaveChanges is called since it is a QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the designer.
===================================================================
Difference between QueryView and Defining Query: a QueryView is defined in the (MSL) mapping file/section of the EDM XML, whereas the DefiningQuery is defined in the store schema (SSDL). A QueryView is written using Entity SQL and is thus database agnostic and can be used against any database/data layer. A DefiningQuery is written using the database language, i.e. TSQL or PL/SQL, thus you have more control.
===================================================================
Performance: Lazy loading is deferred loading done automatically. Lazy loading is supported with EF version 4 and is on by default. 
If you need to turn it off then use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider precompiling the ObjectQuery using the CompiledQuery.Compile method.

    Read the article

  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining. Our corpus of documents will be stored in a database table that is defined as
create table documents(id NUMBER, text VARCHAR2(4000));
However, any suitable Oracle Text-accepted data type can be used for the text. We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document. The token column contains the extracted token. Note that a given document may have many tokens, so there will be one row per token for a given document.
create table extracted_tokens (id NUMBER, token VARCHAR2(4000));
The next step is to iterate over the documents, extract the matching tokens using the index, and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text.
DECLARE
    cursor c2 is
      select id, text
      from documents;
BEGIN
    for r_c2 in c2 loop
       insert into extracted_tokens
         select r_c2.id id, main_term token
         from my_thesaurus_rules
         where matches(query_string,
                       r_c2.text)>0;
    end loop;
END;
Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document.
create table extracted_tokens_tfidf as
  with num_docs as (select count(distinct id) doc_cnt
                    from extracted_tokens),
       tf       as (select a.id, a.token,
                           a.token_cnt/b.num_tokens token_freq
                    from
                      (select id, token, count(*) token_cnt
                       from extracted_tokens
                       group by id, token) a,
                      (select id, count(*) num_tokens
                       from extracted_tokens
                       group by id) b
                    where a.id=b.id),
       doc_freq as (select token, count(*) overall_token_cnt
                    from extracted_tokens
                    group by token)
  select tf.id, tf.token,
         token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf
  from num_docs,
       tf,
       doc_freq df
  where df.token=tf.token;
From the WITH clause, the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by computing the number of times each token appears in a document and dividing that by the number of tokens found in the document. The doc_freq query counts the number of times the token appears overall in the corpus. In the SELECT clause, we compute the tf_idf. Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1.
CREATE TABLE extracted_tokens_tfidf_nt
             NESTED TABLE extracted_tokens_tfidf_1
                 STORE AS extracted_tokens_tfidf_tab AS
             select id,
                    cast(collect(DM_NESTED_NUMERICAL(token,tf_idf)) as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1
             from extracted_tokens_tfidf
             group by id;
To build the clustering model, we create a settings table and then insert the various settings. 
Most notable are the number of clusters (20), using cosine distance which is better for text, turning off auto data preparation since the values are ready for mining, the number of iterations (20) to get a better model, and the split criterion of size for clusters that are roughly balanced in number of cases assigned.
CREATE TABLE km_settings (setting_name  VARCHAR2(30), setting_value VARCHAR2(30));
BEGIN
  INSERT INTO km_settings (setting_name, setting_value)
    VALUES (dbms_data_mining.clus_num_clusters, 20);
  INSERT INTO km_settings (setting_name, setting_value)
    VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine);
  INSERT INTO km_settings (setting_name, setting_value)
    VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);
  INSERT INTO km_settings (setting_name, setting_value)
    VALUES (dbms_data_mining.kmns_iterations, 20);
  INSERT INTO km_settings (setting_name, setting_value)
    VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size);
  COMMIT;
END;
With this in place, we can now build the clustering model.
BEGIN
    DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'TEXT_CLUSTERING_MODEL',
    mining_function     => dbms_data_mining.clustering,
    data_table_name     => 'extracted_tokens_tfidf_nt',
    case_id_column_name => 'id',
    settings_table_name => 'km_settings');
END;
To generate cluster names from this model, check out my earlier post on that topic.
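For reference, the tf_idf value stored by the query above corresponds to the usual TF-IDF definition; written here as a sketch in standard notation, using the query's own column names:
\[
\mathrm{tf\_idf}(t,d) \;=\; \frac{\mathrm{token\_cnt}(t,d)}{\mathrm{num\_tokens}(d)} \times \ln\!\left(\frac{\mathrm{doc\_cnt}}{\mathrm{overall\_token\_cnt}(t)}\right)
\]
that is, the frequency of token t within document d, multiplied by the natural-log inverse document frequency of t across the corpus.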

    Read the article

  • Ada and 'The Book'

    - by Phil Factor
    The long friendship between Charles Babbage and Ada Lovelace created one of the most exciting and mysterious of collaborations ever to have resulted in a technological breakthrough. The fireworks created by the collision of two prodigious mathematical and creative talents resulted in an invention, the Analytical Engine, which went on to change society fundamentally. However, beyond that, we just don't know what the bulk of their collaborative work was about; it was done in the strictest secrecy. Even the known outcome of their friendship, the first programmable computer, was shrouded in mystery. At the time, nobody, except close friends and family, had any idea of Ada Byron's contribution to the invention of the ‘Engine’, and how to program it. Her great insight was published in August 1843, under the initials AAL, standing for Ada Augusta Lovelace, her title then being the Countess of Lovelace. It was contained in a lengthy ‘note’ to her translation of a publication that remains the best description of Babbage's amazing Analytical Engine. The secret identity of the person behind those enigmatic initials was finally revealed by Prince de Polignac who, seventy years later, wrote to Ada's daughter to seek confirmation that her mother had, indeed, been the author of the brilliant sentences that described so accurately how Babbage's mechanical computer could be programmed with punch-cards. L.F. Menabrea's paper on the Analytical Engine first appeared in the 'Bibliotheque Universelle de Geneve' in October 1842, and Ada translated it anonymously for Taylor's 'Scientific Memoirs'. Charles Babbage was surprised that she had not written an original paper, as she already knew a surprising amount about the way the machine worked. He persuaded her to at least write some explanatory notes. These notes ended up extending to four times the length of the original article and represented the first published account of how a machine could be programmed to perform any calculation. Her example of programming the Bernoulli sequence would have worked on the Analytical Engine had the device’s construction been completed, and gave Ada an unassailable claim to have invented the art of programming. What was the reason for Ada's secrecy? She was the only legitimate child of Lord Byron, who was probably the best-known celebrity of the age, so she was already famous. She was a senior aristocrat, with titles, a fortune in money and vast estates in the Midlands. She had political influence, and was the cousin of Lord Melbourne, who was the Prime Minister at that time. She was friendly with the young Queen Victoria. Her mathematical activities were a pastime, and not one that would be considered by others to be in keeping with her roles and responsibilities. You wouldn't dare to dream up a fictional heroine like Ada. She was dazzlingly beautiful and talented. She could speak several languages fluently, and play some musical instruments with professional skill. Contemporary accounts refer to her being 'accomplished in science, art and literature'. On top of that, she was a brilliant mathematician, a talent inherited from her mother, Annabella Milbanke. In her mother's circle of literary and scientific friends was Charles Babbage, and Ada's friendship with him dates from her teenage zest for mathematics. She was one of the first people he'd ever met who understood what he had attempted to achieve with the 'Difference Engine', and with whom he could converse as an intellectual equal.
He arranged for her to have an education from the most talented academics in the country. Ada melted the heart of the cantankerous genius to the point that he became a faithful and loyal father-figure to her. She was one of the very few who could grasp the principles of the later, and very different, ‘Analytical Engine’, which was designed from the start to tackle a variety of tasks. Sadly, Ada Byron's life ended less than a decade after completing the work that assured her long-term fame, in November 1852. She was dying of cancer, her gambling habits had caused her to run up huge debts, she'd had more than one affair, and she was being blackmailed. Her brilliant but unempathic mother was nursing her in her final illness, destroying her personal letters and records, and repaying her debts. Her husband was distraught but helpless. Charles Babbage, however, maintained his steadfast paternalistic friendship to the end. She appointed her loyal friend to be her executor. For years, she and Babbage had been working together on a secret project, known only as 'The Book'. We have a clue to what it was in a letter written by her nine years earlier, on 11th August 1843. It was a joint project by herself and Lord Lovelace, her husband, and was intended to involve Babbage's 'undivided energies'. It involved 'consulting your Engine' (it required Babbage’s computer). The letter gives no hint about the project except for the high-minded nature of its purpose, and its highly mathematical nature.  From then on, the surviving correspondence between the two gives only veiled references to 'The Book'. There isn't much, since Babbage later destroyed any letters that could have damaged her reputation within the Establishment. 'I cannot spare the book today, which I am very sorry for. At the moment I want it for constant reference, but I think you can have it tomorrow' (Oct 1844)  And 'I will send you the book directly, and you can say, when you receive it, how long you will want to keep it'. (Nov 1844)  The two of them were obviously intent on the work: she writes, four years later, 'I have an engagement for Wednesday which will prevent me from attending to your wishes about the book' (Dec 1848). This was something that they both needed to work on, but could not do in parallel: 'I will send the book on Tuesday, and it can be left with you till Friday' (11 Feb 1849). After six years' work, it had been so well-handled that it was beginning to fall apart: 'Don't forget the new cover you promised for the book. The poor book is very shabby and wants one' (20 Sept 1849). So what was going on? The word 'book' was not a code-word: it was a real book, probably a 'printer's blank', plain paper, but properly bound so printers and publishers could show off how the published work might look. The hints from the correspondence are of advanced mathematics. It is obvious that the book was travelling between them, back and forth, each one working on it for less than a week before passing it back. Ada and her husband were certainly involved in gambling large sums of money on the horses, and so most biographers have concluded that the three of them were trying to calculate the mathematical odds on the horses. This theory has three large problems. Firstly, Ada's original letter proposing the project refers to its high-minded nature. Babbage was temperamentally opposed to gambling and would scarcely have given so much time to the project, even though he was devoted to Ada.
Secondly, Babbage would very soon have realized the hopelessness of trying to beat the bookies. This sort of betting holds no attraction for someone of his intellectual background. The third problem is that any work on calculating the odds on horses would not need a well-thumbed book to pass back and forth between them; they would not have had to work in series. The original project was instigated by Ada, along with her husband, William King-Noel, 1st Earl of Lovelace. Charles Babbage was invited to join the project after the couple had come up with the idea. What could William have contributed? One might assume that William was a Bertie Wooster character, addicted only to the joys of the turf, but this was far from the truth. He was a scientist, a Cambridge graduate who was later elected to be a Fellow of the Royal Society. After Eton, he went to Trinity College, Cambridge. On graduation, he entered the diplomatic service and acted as secretary under Lord Nugent, who was Lord Commissioner of the Ionian Islands. William was very friendly with Babbage too, able to discuss scientific matters on equal terms. He was a capable engineer who invented a process for bending large timbers by the application of steam heat. He delivered a paper to the Institution of Civil Engineers in 1849, and received praise from the great engineer, Isambard Kingdom Brunel. As well as being Lord Lieutenant of the County of Surrey for most of Victoria's reign, he had time for a string of scientific and engineering achievements. Whatever the project was, it is unlikely that William was a junior partner. After Ada's death, the project disappeared. Then, two years later, Babbage, through one of his occasional outbursts of temper, demonstrated that he was able to decrypt one of the most powerful of secret codes, Vigenère's autokey cipher.  All contemporary diplomatic and military messages used a variant of this cipher. Babbage had made three important discoveries, namely, the mathematical law of this cipher, the principle of the key periodicity, and the technique of the symmetry of position. The technique is now known as the Kasiski examination, also called the Kasiski test, but Babbage got there first. At one time he listed, amongst his future projects, the writing of a book, 'The Philosophy of Decyphering', but it never came to anything. This discovery was going to change the course of history, since it was used to decipher the Russians’ military dispatches in the Crimean War. Babbage himself played a role during the Crimean War as a cryptographical adviser to his friend, Rear-Admiral Sir Francis Beaufort of the Admiralty. This is as much as we can be certain about in trying to make sense of the bulk of the time that Charles Babbage and Ada Lovelace worked together. Nine years of intensive work, involving the 'Engine' and a great deal of mathematics and research, seems to have been lost: or has it? I've argued in the past http://www.simple-talk.com/community/blogs/philfactor/archive/2008/06/13/59614.aspx that the cracking of the Vigenère autokey cipher was a fundamental motive behind the British Government's support and funding of the 'Difference Engine'. The Duke of Wellington, who understood the military significance of being able to read enemy dispatches, was the most steadfast advocate of the project.
If the three friends were actually doing the work of cracking codes by mathematical means, using the techniques of key periodicity and symmetry of position (the use of a book being passed quickly to and fro is very suggestive), intending then to use the 'Engine' to do the routine cracking of each dispatch, then this is a rather different story. The project was Ada and William's idea. (William had served in the diplomatic service and would be familiar with the use of codes). This makes Ada Lovelace the initiator of a project which, by giving both Britain, and probably the USA, a diplomatic and military advantage in the second part of the Nineteenth century, changed world history. Ada would never have wanted any credit for cracking the cipher, and developing the method that rendered all contemporary military and diplomatic ciphering techniques nugatory; quite the reverse. And it is clear from the gaps in the record of the letters between the collaborators that the evidence was destroyed, probably at her request, by her irascible but intensely honorable executor, Charles Babbage. Charles Babbage toyed with the idea of going public, but the Crimean War put an end to that. The British Government had a valuable secret, and intended to keep it that way. Ada and Charles had quite often discussed possible moneymaking projects that would fund the development of the Analytical Engine, the first programmable computer, but their secret work was never in the running as a potential cash cow. I suspect that the British Government was, even then, working on the concealment of a discovery whose value to the nation depended on it remaining secret. The success of code-breaking in the Crimean War, and the American Civil War, led to the British and Americans subsequently giving much more weight and funding to the science of decryption. Paradoxically, this makes Ada's contribution even closer to the creation of Colossus, the first digital computer, at Bletchley Park, built specifically to crack the Nazis’ secret codes.

    Read the article

  • Time Warp

    - by Jesse
    It’s no secret that daylight savings time can wreak havoc on systems that rely heavily on dates. The system I work on is centered around recording dates and times, so naturally my co-workers and I have seen our fair share of date-related bugs. From time to time, however, we come across something that we haven’t seen before. A few weeks ago the following error message started showing up in our logs: “The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid.” This seemed very cryptic, especially since it was coming from areas of our application that are typically only concerned with capturing date-only values (no explicit time component) from the user, like reports that take a “start date” and “end date” parameter. For these types of parameters we just leave off the time component when capturing the date values, so midnight is used as a “placeholder” time. How is midnight an “invalid time”? Globalization Is Hard Over the last couple of years our software has been rolled out to users in several countries outside of the United States, including Brazil. Brazil begins and ends daylight savings time at midnight on pre-determined days of the year. On October 16, 2011 at midnight, many areas in Brazil began observing daylight savings time, at which time their clocks were set forward one hour. This means that at the instant it became midnight on October 16, it actually became 1:00 AM, so any time between 12:00 AM and 12:59:59 AM never actually happened. Because we store all date values in the database in UTC, we always adjust any “local” dates provided by a user to UTC before using them as filters in a query. The error we saw was thrown by .NET when trying to convert the Brazilian local time of 2011-10-16 12:00 AM to UTC since that local time never actually existed. We hadn’t experienced this same issue with any of our US customers because the daylight savings time changes in the US occur at 2:00 AM, which doesn’t conflict with our “placeholder” time of midnight. Detecting Invalid Times In .NET you might use code similar to the following for converting a local time to UTC: var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone. var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId); var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone); The code above throws the “invalid time” exception referenced above. We could try to detect whether or not the local time is invalid with something like this: var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone. var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId); if (localTimeZone.IsInvalidTime(localDate)) localDate = localDate.AddHours(1); var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone); This code works in this particular scenario, but it hardly seems robust. It also does nothing to address the issue that can arise when dealing with the ambiguous times that fall around the end of daylight savings. When we roll the clocks back an hour, we record the same hour on the same day twice in a row. To continue on with our Brazil example, on February 19, 2012 at 12:00 AM, it will immediately become February 18, 2012 at 11:00 PM all over again.
In this scenario, how should we interpret February 18, 2012 11:30 PM? Enter Noda Time I heard about Noda Time, the .NET port of the Java library Joda Time, a little while back and filed it away in the back of my mind under the “sounds-like-it-might-be-useful-someday” category.  Let’s see how we might deal with the issue of invalid and ambiguous local times using Noda Time (note that as of this writing the samples below will only work using the latest code available from the Noda Time repo on Google Code. The NuGet package version 0.1.0 published 2011-08-19 will incorrectly report unambiguous times as being ambiguous): var localDateTime = new LocalDateTime(2011, 10, 16, 0, 0); const string timeZoneId = "Brazil/East"; var timezone = DateTimeZone.ForId(timeZoneId); var localDateTimeMapping = timezone.MapLocalDateTime(localDateTime); ZonedDateTime unambiguousLocalDateTime; switch (localDateTimeMapping.Type) { case ZoneLocalMapping.ResultType.Unambiguous: unambiguousLocalDateTime = localDateTimeMapping.UnambiguousMapping; break; case ZoneLocalMapping.ResultType.Ambiguous: unambiguousLocalDateTime = localDateTimeMapping.EarlierMapping; break; case ZoneLocalMapping.ResultType.Skipped: unambiguousLocalDateTime = new ZonedDateTime( localDateTimeMapping.ZoneIntervalAfterTransition.Start, timezone); break; default: throw new InvalidOperationException(string.Format("Unexpected mapping result type: {0}", localDateTimeMapping.Type)); } var convertedDateTime = unambiguousLocalDateTime.ToInstant().ToDateTimeUtc(); Let’s break this sample down: I’m using the Noda Time ‘LocalDateTime’ object to represent the local date and time. I’ve provided the year, month, day, hour, and minute (zeros for the hour and minute here represent midnight). You can think of a ‘LocalDateTime’ as an “unvalidated” date and time; there is no information available about the time zone that this date and time belong to, so Noda Time can’t make any guarantees about its ambiguity. The ‘timeZoneId’ in this sample is different than the ones above. In order to use the .NET TimeZoneInfo class we need to provide Windows time zone ids. Noda Time expects an Olson (tz / zoneinfo) time zone identifier and does not currently offer any means of mapping the Windows time zones to their Olson counterparts, though project owner Jon Skeet has said that some sort of mapping will be publicly accessible at some point in the future. I’m making use of the Noda Time ‘DateTimeZone.MapLocalDateTime’ method to disambiguate the original local date time value. This method returns an instance of the Noda Time object ‘ZoneLocalMapping’ containing information about how the provided local date time maps to the provided time zone.  The disambiguated local date and time value will be stored in the ‘unambiguousLocalDateTime’ variable as an instance of the Noda Time ‘ZonedDateTime’ object. An instance of this object represents a completely unambiguous point in time and is comprised of a local date and time, a time zone, and an offset from UTC. Instances of ZonedDateTime can only be created from within the Noda Time assembly (the constructor is ‘internal’) to guarantee to callers that each instance represents an unambiguous point in time. The value of the ‘unambiguousLocalDateTime’ might vary depending upon the ‘ResultType’ returned by the ‘MapLocalDateTime’ method.
There are three possible outcomes: If the provided local date time is unambiguous in the provided time zone I can immediately set the ‘unambiguousLocalDateTime’ variable from the ‘UnambiguousMapping’ property of the mapping returned by the ‘MapLocalDateTime’ method. If the provided local date time is ambiguous in the provided time zone (i.e. it falls in an hour that was repeated when moving clocks backward from Daylight Savings to Standard Time), I can use the ‘EarlierMapping’ property to get the earlier of the two possible local dates to define the unambiguous local date and time that I need. I could have also opted to use the ‘LaterMapping’ property in this case, or even returned an error and asked the user to specify the proper choice. The important thing to note here is that as the programmer I’ve been forced to deal with what appears to be an ambiguous date and time. If the provided local date time represents a skipped time (i.e. it falls in an hour that was skipped when moving clocks forward from Standard Time to Daylight Savings Time), I have access to the time intervals that fell immediately before and immediately after the point in time that caused my date to be skipped. In this case I have opted to disambiguate my local date and time by moving it forward to the beginning of the interval immediately following the skipped period. Again, I could opt to use the end of the interval immediately preceding the skipped period, or raise an error depending on the needs of the application. The point of this code is to convert a local date and time to a UTC date and time for use in a SQL Server database, so the final ‘convertedDateTime’ variable (typed as a plain old .NET DateTime) has its value set from a Noda Time ‘Instant’. An ‘Instant’ represents a number of ticks since 1970-01-01 at midnight (Unix epoch) and can easily be converted to a .NET DateTime in the UTC time zone using the ‘ToDateTimeUtc()’ method. This sample is admittedly contrived and could certainly use some refactoring, but I think it captures the general approach needed to take a local date and time and convert it to UTC with Noda Time. At first glance it might seem that Noda Time makes this “simple” code more complicated and verbose because it forces you to explicitly deal with the local date disambiguation, but I feel that the length and complexity of the Noda Time sample is proportionate to the complexity of the problem. Using TimeZoneInfo leaves you susceptible to overlooking ambiguous and skipped times that could result in run-time errors or (even worse) run-time data corruption in the form of a local date and time being adjusted to UTC incorrectly. I should point out that this research is my first look at Noda Time and I know that I’ve only scratched the surface of its full capabilities. I also think it’s safe to say that it’s still beta software for the time being, so I’m not rushing out to use it in production systems just yet, but I will definitely be tinkering with it more and keeping an eye on it as it progresses.
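For comparison, here is a minimal sketch of what explicitly handling both the skipped and the repeated hours might look like using only the built-in TimeZoneInfo API discussed above. The helper name ToUtcSafe and its policy choices (shift skipped times forward, accept the default standard-time interpretation of repeated times) are illustrative assumptions, not something from the original post:

using System;

public static class LocalTimeConverter
{
    // Hypothetical helper (not from the original post): convert a local wall-clock
    // DateTime (Kind.Unspecified) to UTC while explicitly handling the hours that
    // are skipped or repeated around a daylight savings transition.
    public static DateTime ToUtcSafe(DateTime localDate, TimeZoneInfo zone)
    {
        // A skipped time (e.g. midnight on 2011-10-16 in Brasilia) never existed;
        // assume a one-hour gap and move the value forward before converting.
        if (zone.IsInvalidTime(localDate))
            localDate = localDate.AddHours(1);

        // A repeated time occurred twice; ConvertTimeToUtc resolves it as standard
        // time (the later occurrence). An application that prefers the earlier offset
        // could branch on zone.IsAmbiguousTime(localDate) here and apply one of
        // zone.GetAmbiguousTimeOffsets(localDate) itself.
        return TimeZoneInfo.ConvertTimeToUtc(localDate, zone);
    }
}

Calling ToUtcSafe(new DateTime(2011, 10, 16), localTimeZone) with the Brasilia zone from the earlier snippets then converts without throwing, treating the value as 1:00 AM local time. The Noda Time version remains preferable because it forces every caller to make these choices explicitly rather than burying them in a helper.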

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
    Out of the box ASP.NET WebAPI does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow browser-based JavaScript client applications to bypass cross-site scripting limitations and serve data from a Web server other than the current one. AJAX in Web Applications uses the XmlHttp object which by default doesn't allow access to remote domains. There are a number of ways around this limitation; <script> tag loading with JSONP is one of the easiest and semi-official ways that you can do this. JSONP works by combining JSON data and wrapping it into a function call that is executed when the JSONP data is returned. If you use a tool like jQuery it's extremely easy to access JSONP content. Imagine that you have a URL like this: http://RemoteDomain/aspnetWebApi/albums which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if the server can serve up JSONP this data can be accessed cross domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP: function getAlbums() { $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?",null, function (albums) { alert(albums.length); }); } The resulting callback behaves the same as if the call had been made to a local server once the data is returned. jQuery deserializes the data and feeds it into the method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed. This all works fine - as long as the server can serve the data with JSONP. What does JSONP look like? JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this: jQuery17103401925975181569_1333408916499( [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}] ) The way JSONP works is that the client (jQuery in this case) sends off the request, receives the response and evals it. The eval basically executes the function and deserializes the JSON inside of the function. It's actually a little more complex for the framework that does this, but that's the gist of what happens. JSONP works by executing the code that gets returned from the JSONP call. JSONP and ASP.NET Web API As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API. But it's pretty easy to create and plug-in a custom formatter that provides this functionality. The following code is based on Christian Weyer's example but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds.
Here's the code:  using System; using System.IO; using System.Net; using System.Net.Http.Formatting; using System.Net.Http.Headers; using System.Threading.Tasks; using System.Web; using System.Net.Http; namespace Westwind.Web.WebApi { /// <summary> /// Handles JsonP requests when requests are fired with /// text/javascript or application/json and contain /// a callback= (configurable) query string parameter /// /// Based on Christian Weyers implementation /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs /// </summary> public class JsonpFormatter : JsonMediaTypeFormatter { public JsonpFormatter() { SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json")); SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript")); //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json")); JsonpParameterName = "callback"; } /// <summary> /// Name of the query string parameter to look for /// the jsonp function name /// </summary> public string JsonpParameterName {get; set; } /// <summary> /// Captured name of the Jsonp function that the JSON call /// is wrapped in. Set in GetPerRequestFormatter Instance /// </summary> private string JsonpCallbackFunction; public override bool CanWriteType(Type type) { return true; } /// <summary> /// Override this method to capture the Request object /// and look for the query string parameter and /// create a new instance of this formatter. /// /// This is the only place in a formatter where the /// Request object is available. /// </summary> /// <param name="type"></param> /// <param name="request"></param> /// <param name="mediaType"></param> /// <returns></returns> public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type, HttpRequestMessage request, MediaTypeHeaderValue mediaType) { var formatter = new JsonpFormatter() { JsonpCallbackFunction = GetJsonCallbackFunction(request) }; return formatter; } /// <summary> /// Override to wrap existing JSON result with the /// JSONP function call /// </summary> /// <param name="type"></param> /// <param name="value"></param> /// <param name="stream"></param> /// <param name="contentHeaders"></param> /// <param name="transportContext"></param> /// <returns></returns> public override Task WriteToStreamAsync(Type type, object value, Stream stream, HttpContentHeaders contentHeaders, TransportContext transportContext) { if (!string.IsNullOrEmpty(JsonpCallbackFunction)) { return Task.Factory.StartNew(() => { var writer = new StreamWriter(stream); writer.Write( JsonpCallbackFunction + "("); writer.Flush(); base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext).Wait(); writer.Write(")"); writer.Flush(); }); } else { return base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext); } } /// <summary> /// Retrieves the Jsonp Callback function /// from the query string /// </summary> /// <returns></returns> private string GetJsonCallbackFunction(HttpRequestMessage request) { if (request.Method != HttpMethod.Get) return null; var query = HttpUtility.ParseQueryString(request.RequestUri.Query); var queryVal = query[this.JsonpParameterName]; if (string.IsNullOrEmpty(queryVal)) return null; return queryVal; } } } Note again that this code will not work with the Beta bits of Web API - it works only with post beta bits from CodePlex and hopefully this will continue to work until RTM :-) This code is a bit different from Christians original code as the API has changed. 
The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use the request to pick up the values you need and store them on the formatter. Note that I also have to create and return a new instance of the formatter, since I'm storing request-specific state (whether the callback= querystring is present) on that instance. Other than that the code should be straightforward: it basically writes out the function pre- and post-amble and then defers to the base formatter to produce the JSON that gets wrapped in the function call. The code uses the Async APIs to write this data out (this will take some getting used to seeing all over the place for me). Hooking up the JsonpFormatter Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object.  protected void Application_Start(object sender, EventArgs e) { // Verb Routing RouteTable.Routes.MapHttpRoute( name: "AlbumsVerbs", routeTemplate: "albums/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" } ); GlobalConfiguration .Configuration .Formatters .Insert(0, new Westwind.Web.WebApi.JsonpFormatter()); }   That's all it takes. Note that I added the formatter at the top of the list of formatters rather than at the end - this ordering is required. The JSONP formatter needs to fire before any other JSON formatter since it relies on the JSON formatter to encode the actual JSON data. If you reverse the order the JSONP output never shows up. So, in general, when adding new formatters also try to be aware of the order of the formatters as they are added. Resources: JsonpFormatter Code on GitHub. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api.
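As a closing illustration, here is a minimal sketch of the kind of controller the "albums/{title}" route above could dispatch to. The Album shape and the sample data are illustrative assumptions only; the original post does not show the controller:

using System.Collections.Generic;
using System.Web.Http;

// Illustrative data shape only - the original post does not define the Album type.
public class Album
{
    public string Id { get; set; }
    public string AlbumName { get; set; }
}

// Matches the "AlbumApi" controller named in the route registered in Application_Start.
public class AlbumApiController : ApiController
{
    // Handles GET /albums. With the JsonpFormatter registered, a request such as
    //   GET /albums?callback=jQuery17103401925975181569_1333408916499
    // returns this list wrapped in the supplied callback function.
    public IEnumerable<Album> Get()
    {
        return new List<Album>
        {
            new Album { Id = "34043957", AlbumName = "Dirty Deeds Done Dirt Cheap" }
        };
    }
}

Requesting this action with $.getJSON and a callback parameter, as in the jQuery snippet at the start of the post, exercises the formatter end to end.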

    Read the article

  • CodePlex Daily Summary for Friday, March 19, 2010

    CodePlex Daily Summary for Friday, March 19, 2010New Projects[Tool] Vczh Visual Studio UnitTest Coverage Analyzer: Analyzing Visual Studio Unittest Coverage Exported XML filecrudwork is a library of reuseable classes for developing .NET applications: crudwork is a collection of reuseable .NET classes and features. If you searched for StpLibrary and landed here, you're in the right place. Origi...CWU Animated AVL Tree Tutorial: This is a silverlight demo of a self-balancing AVL tree. On the original team were CWU undergraduates Eric Brown, Barend Venter, Nick Rushton, Arry...DotNetNuke® Skin Modern: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by Salar Golestanian of SalarO. The skin utilizes both the telerik...DotNetNuke® Skin Monster: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Jon Edwards of SlumtownHero.co.za. This package uses totally tab...DotNetNuke® Skin Synapse: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Exionyte Solutions. This package features 2 colors with 4...earthworm: Earthworm is a pet project intended as a repository of data access logic, including some ORM, state management and bridging the gap between connect...ema: EMA is a place for collaborative effort to implement a PowerGrid game engine. For more info on PowerGrid the board game see: http://www.boardgamege...Extended SharePoint Web Parts: Extending capabilities of existing SharePoint 2007 Web Parts by inheriting and alterFreedomCraft: Craft development siteG.B SecondLife Sculpter: This is a Sculptor for "secondlife"InfoPath Error Viewer: InfoPath Error Viewer provides an intuitive list to show all errors in the entire InfoPath form. You'll no longer have to find the validation error...LEET (LEET Enhances Exploratory Testing): LEET is a capture-replay tool based on Microsoft’s User Interface Automation Framework. It is targeted at agile teams, and provides support for us...Linq To Entity: Linq,Linq to Entity,EntityMACFBTest: This is a test for a Facebook application.MetaProperties: MetaProperties helps you to create event driven architectures in .NET. It saves you time and it helps you avoid mistakes. It's compatible with WPF ...ownztec web: projeto da ownztec.comParallel Programming Guide: Content for the latest patterns & practices book on design patterns for parallel programming. Downloadable book outline and draft chapters as well ...Perseus - Sistema de Matrícula On-Line: Sistema de matrícula desenvolvido pelo 5º período de Desenvolvimento Web da FACECLA.Project Tru Tiên: Project EL tru tiên, ZhuxianProSysPlus.Net Framework: How do I get the ease and efficiency of my work in VFP (R.I.P. 2010)? The answer is here: the ProSysPlus.Net Framework. Why is it open source? Wh...Quick Anime Renamer: Originally included with AniPlayer X, Quick Anime Renamer easily renames your anime files into a "cleaner" format so you wont get retinal detachment.Simple XNA Button: This is a project of a helper for instancing Simple Buttons in XNA with a ButtonPanel. Its got various features like. Load a Panel from a Plain Tex...SteelVersion - Monitor your .NET Application versioning: SteelVersion helps you to find and store versioning information about .NET assemblies ("Explorer" mode). 
It also makes it easier to continuously ch...Stellar Results: Astronomical Tracking System for IUPUI CSCI506 - Fall 2007, Team2TheHunterGetsTheDeer: first AIwandal: wandalWeb App Data Architect's CodeCAN: Contains different types of code samples to explore different types of technical solutions/patterns from an architect's point of view.Yet Another GPS: Yet another GPS tracker is a very powerful GPS track application for Windows MobileNew ReleasesASP.Net Client Dependency Framework: v1.0 RC1: ASP.Net Client Dependency has progressed to release candidate 1. With the community feedback and bug reports we've been able to make some great upd...C# FTP Library: FTPLib v1.0.1.1: This release has a couple of small bug fixes as well as the new abilities to specify a port to connect to and to create a new directory with the Cr...crudwork is a library of reuseable classes for developing .NET applications: crudwork 2.2.0.1: crudwork 2.2.0.1 (initial version)DotNetNuke® Skin Modern: Modern Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by Salar Golestanian of SalarO. The skin utilizes both the telerik...DotNetNuke® Skinning Extensions: Nav Menu Demo Skins: This very basic skin demonstrates: 1. How to force NAV menu to generate an unordered list menu 2. The creation of a sub menu, both horizontal and ...DotNetNuke® XML: 04.03.05: XML/XSL Module 04.03.05 Release Candidate This is a maintainace release. Full Quallified Namespace avoids conflicts with Namespaces used by Teler...eCommerce by Onex Community Edition: Installer of eCommerce by Onex Community 1.0: Installer of eCommerce by Onex Community 1.0 Last changes: Added integration with Paypal Corrected of adding photos and attachments to products ...eCommerce by Onex Community Edition: Source code of eCommerce by Onex Community 1.0: Changes in version 1.0: Added integration with Paypal Corrected of adding photos and attachments to products Fixed problem with cancellation of...Employee Info Starter Kit: v2.2.0 (Visual Studio 2005-2008): This is a starter kit, which includes very simple user requirements, where we can create, read, update and delete (CRUD) the employee info of a com...Employee Info Starter Kit: v4.0.0.alpha (Visual Studio 2010): Employee Info Starter Kit is a ASP.NET based web application, which includes very simple user requirements, where we can create, read, update and d...Encrypted Notes: Encrypted Notes 1.4: This is the latest version of Encrypted Notes (1.4). It has an installer - it will create a directory 'CPascoe' in My Documents. Once you have ext...Extended SharePoint Web Parts: ContentQueryAdvanced: This .wsp file contains a single web part ContentQueryAdvanced. This web part inherits from ContentQuery web part and adds a ToolPart field for a ...Extended SharePoint Web Parts: Source Code: Zip file includes all the source code used to extend Content Query Web Part, adding a Tool Part field to insert a CAML query/filter/sortFacebook Developer Toolkit: Version 3.02: Updated copyright. No new functionality. Version 3.1 in the works.fleXdoc: template-based server-side document generator (docx): fleXdoc 1.0 (final): fleXdoc consists of a webservice and a (test)client for the service. Make sure you also download the testclient: you can use it to test the install...InfoPath Error Viewer: InfoPath Error Viewer 1.0: This is an intial version of this tool. You can: 1. View all errors in a list. 2. Locate to a binding control of an error field. 3. 
See the detai...LEET (LEET Enhances Exploratory Testing): LEET Alpha: The first public release of LEET includes the ability to record tests from running GUIs, assist in writing tests manually from a running GUI, edit ...Linq To Entity: Linq to Entity: The Entity Framework enables developers to work with data in the form of domain-specific objects and properties, such as customers and customer add...MDownloader: MDownloader-0.15.8.56699: Fixed peformance and memory usage. Fixed Letitbit provider. Added detecting IMDB, NFO, TV.com... links in RSS Monitor. Supported password len...MetaProperties: MetaProperties 1.0.0.0: This is a multi-targeted release of MetaProperties for the desktop and Silverlight versions of the .NET framework. The desktop version is fully ...Nito.KitchenSink: Version 2: Added a cancelable Stream.CopyTo. Depends on Nito.Linq 0.2. Please report any issues via the Issue Tracker.Project Server 2007 Timesheet AutoStatus Plus: AutoStatusPlus 1.0.1.0: AutoStatusPlus 1.0.1.0 Supported Systems x86 and x64 Project Server 2007 deployments with or without MOSS 2007 Recommended Patchlevels WSS 3.0: ...Project Tru Tiên: Elements-test V1: Mô tả Bản elements.data - có full ID của bản Elemens.data Tru tiên 2 VIệt Nam (V37) - có full ID của bản Elements.data server offline tru tiên (hiệ...Quick Anime Renamer: Quick Anime Renamer v0.1: AniPlayer X v1.4.5 - started 3/18/2010Initial Release!QuickieB2B: Quickie v1.0b: QuickieB2B - made for DEV4FUN competition organized by Microsoft CroatiaSilverlight 3.0 Advanced ToolTipService: Advanced ToolTipService v2.0.2: This release is compiled against the Silverlight 3.0 runtime. A demonstration on how to set the ToolTip content to a property of the DataContext o...Simple XNA Button: XNA Button 1.0: The Main Project. this uses XNA 3.0 but it can be build with lower versions of XNA Framework. This was made using Visual Studio 2008.StoryQ: StoryQ 2.0.3 Library and Converter UI: New features in this release: Tagging and a tag-capable rich html report. The code generator is capable of generating entire classes This relea...The Silverlight Hyper Video Player [http://slhvp.com]: Version 1.0: Version 1.0VCC: Latest build, v2.1.30318.0: Automatic drop of latest buildWord Index extracts words or sentences from Word document according to patterns: Word Index 1.0.1.0 (For Word 2007 and Word 2003): Word Index for Word 2007 & 2003 : WordIndex.msi (Win-Installer Setup for Word Index) Source code : wordindex.codeplex.comV1.0.1.0.zip : (Source co...Yet Another GPS: YAGPS-Alfa.1: Yet another GPS tracker is a very powerful GPS track application for Windows MobileMost Popular ProjectsMetaSharpRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)ASP.NETMost Active ProjectsLINQ to TwitterRawrOData SDK for PHPjQuery Library for SharePoint Web ServicesDirectQOpen Data App Framework (ODAF)patterns & practices – Enterprise LibraryBlogEngine.NETPHPExcelNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • CodePlex Daily Summary for Wednesday, February 23, 2011

    CodePlex Daily Summary for Wednesday, February 23, 2011Popular ReleasesSQL Server CLR Function for Address Correction and Geocoding: Release 2.1: Adds support for the User Key argument in Process function calls.ClosedXML - The easy way to OpenXML: ClosedXML 0.45.2: New on this release: 1) Added data validation. See Data Validation 2) Deleting or clearing cells deletes the hyperlinks too. New on v0.45.1 1) Fixed issues 6237, 6240 New on v0.45.2 1) Fixed issues 6257, 6266 New Examples Data ValidationCrystalbyte Equinox (LINQ to IMAP): Equinox Alpha Release v0.3.0.0: Fixed bugsFixed issue introduced in last release; No connection could be established since the client did not read the welcome message during the initial connection attempt. Contact names are now being properly parsed into the envelope Introduced featuresImplemented SMTP support, the client is contained inside the Crystalbyte.Equinox.Smtp.dll which was added to the current release.OMEGA CMS: OMEGA CMA - Alpha 0.2: A few fixes for OMEGA Framework (DLL) A few tweeks for OMEGA CMSJSLint for Visual Studio 2010: 1.2.4: Bug Fix release.Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.2: New control, Toast Prompt! Removed progress bar since Silverlight Toolkit Feb 2010 has it.Umbraco CMS: Umbraco 4.7: Service release fixing 31 issues. A full changelog will be available with the final stable release of 4.7 Important when upgradingUpgrade as if it was a patch release (update /bin, /umbraco and /umbraco_client). For general upgrade information follow the guide found at http://our.umbraco.org/wiki/install-and-setup/upgrading-an-umbraco-installation 4.7 requires the .NET 4.0 framework Web.Config changes Update the web web.config to include the 4 changes found in (they're clearly marked in...HubbleDotNet - Open source full-text search engine: V1.1.0.0: Add Sqlite3 DBAdapter Add App Report when Query Cache is Collecting. Improve the performance of index through Synchronize. Add top 0 feature so that we can only get count of the result. Improve the score calculating algorithm of match. Let the score of the record that match all items large then others. Add MySql DBAdapter Improve performance for multi-fields sort . Using hash table to access the Payload data. The version before used bin search. Using heap sort instead of qui...Silverlight????[???]: silverlight????[???]2.0: ???????,?????,????????silverlight????。DBSourceTools: DBSourceTools_1.3.0.0: Release 1.3.0.0 Changed editors from FireEdit to ICSharpCode.TextEditor. Complete re-vamp of Intellisense ( further testing needed). Hightlight Field and Table Names in sql scripts. Added field dropdown on all tables and views in DBExplorer. Added data option for viewing data in Tables. Fixed comment / uncomment bug as reported by tareq. Included Synonyms in scripting engine ( nickt_ch ).IronPython: 2.7 Release Candidate 1: We are pleased to announce the first Release Candidate for IronPython 2.7. This release contains over two dozen bugs fixed in preparation for 2.7 Final. See the release notes for 60193 for details and what has already been fixed in the earlier 2.7 prereleases. - IronPython TeamCaliburn Micro: A Micro-Framework for WPF, Silverlight and WP7: Caliburn.Micro 1.0 RC: This is the official Release Candicate for Caliburn.Micro 1.0. The download contains the binaries, samples and VS templates. VS Templates The templates included are designed for situations where the Caliburn.Micro source needs to be embedded within a single project solution. 
This was targeted at government and other organizations that expressed specific requirements around using an open source project like this. NuGet This release does not have a corresponding NuGet package. The NuGet pack...Caliburn: A Client Framework for WPF and Silverlight: Caliburn 2.0 RC: This is the official Release Candidate for Caliburn 2.0. It contains all binaries, samples and generated code docs.Chiave File Encryption: Chiave 0.9: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Feedbacks are Welcome!....Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...PowerGUI Visual Studio Extension: PowerGUI VSX 1.3.2: New FeaturesPowerGUI Console Tool Window PowerShell Project Type PowerGUI 2.4 SupportMiniTwitter: 1.66: MiniTwitter 1.66 ???? ?? ?????????? 2 ??????????????????? User Streams ?????????Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features:WPF desktop explorer client Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above) Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File Explorer supports operations running on multiple remote applications at the same time Explorer detects application disconnect (1-2 second delay) Explorer confirms operation completed status Explorer d...Image.Viewer: 2011: First version of 2011Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit OverviewSilverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...New Projects.Net Dating & Social Software Suite: A free, open source dating and social software suite. Dot Net Dating Suite uses the latest features in .Net combined with experienced developers efforts. 
This allows us to deliver professional public SEO websites with the support of an enterprise enabled administration suite.AllegroSharp: Biblioteka zapewniajaca obsluge serwisu aukcyjnego Allegro.pl w oparciu o udostepniona publicznie usluge Allegro Web API.asdfasdfasdf: asdfasdfasdfauto: auto siteAWS Monitor: A web app utilizing the Amazon Web Services API with a focus on browsing through and analyzing data graphically, mostly from CloudWatch - the API providing metrics about the usage of all the Amazon Web Services such as EC2 (Cloud Computing) and ELB (Elastic Load Balancing).Calculation of FEM using DirectX Libraries: Calculation of FEM using DirectX LibrariesCART: System CART (Creative Application to Remedy Traffics) poprawi przepustowosc infrastruktury drogowej i plynnosci ruchu pojazdów w aglomeracjach miejskich. Chaining Assertion for MSTest: Chaining Assertion for MSTest. Simpleness Assert Extension Method on Object. This provides only one .cs file.CodeFirst Membership Provider: Custom Membership Provider based on SimpleMembershipProvider using Entity Framework Code-First, intended for use in ASP.NET MVC 3. Thanks to ASP.NET Web Pages team for building a simple membership provider :PFourSquareSharp: Simple OAuth2 Implementation of FourSquare API in .NETHelix Engine: The Helix Engine is an Isometric Rendering Engine for SilverlightLighthouse - Versatile Unit Test Runner for Silverlight: Lighthouse provides you with set of simple but powerful tools to run Silverlight Unit Tests from Command Line, Windows Test Runner Application and from Resharper plugin for Visual Studio.MISCE: Designs for a Minimal Instruction Set Computer for Education along with a simulator/compiler written in Excel. A computer "from first principles" that utilises a minimal instruction set and can be built for demonstration purposes as part of a technology or computing course. Oh My Log: Oh My Log inventorises eventlog files from Windows Servers in a network. Clean and simple sysadmin tool. OSIS Interop Tests: These are the tests created for use by the OSIS working group (http://osis.idcommons.net) to test interoperability features of user-centric solutions during interop events.Program Options: Parse command line optionsSearchable Property Updater for Microsoft Dynamics CRM 2011: Searchable Property Updater makes it easier for Microsoft Dynamics CRM 2011 customizers to bulk change the "Active for advanced find" property of entities attributesSilverlight Bolo: silverlight bolo cloneSimulacia a vizualizacia vlasov: simulacia a vizualizacia vlasov pomocou GPUSmartkernel: Smartkernel:Framework,Example,Platform,Tool,AppSusuCMS: SusuCMSSwimlanes for Scrum: Swimlanes Taskboard for Scrum Team Projects with TFS setup using SfTS (Scrum for Team System) V3 templateteamdoer: These are the actual source codes that power teamdoer.com - a simple collaborative task/project management software apptiencd: Tiencd projectVG Current Item Display Web Part: VG Current Item Display Web Part allows to display current SharePoint item metadata. For example if you put it on to the Web Part page or list item form it will display the custom view of the current item properties. Output is easily customized with XSLT file.Virtual Interactive Shopper: Addin for SeeMe Rehabilitation System. More information about the SeeMe system can be found at http://www.brontesprocessing.com/health/SeeMeVisual Studio Strategy Manager: Provide a way to create code generation strategies and to manage them in Visual Studio 2010. 
Strategies can interact with Visual Studio events like document events (saved, closed, opened), building event or DSL Tools events (model element created, deleted, modified..).VMarket: <project name>VMarket</project name> <programming language>asp.net</programming language> <project description> VMarket is virtual market systems which allow users to act as seller or buyer and manage their property online </project description>webclerk: webclerk's project based on microsoft techWebTest: Just Test!Work efficiency (WE) ????????: Work efficiency ???????? ????: ???? ????zrift: nothing here, move along

    Read the article

  • CodePlex Daily Summary for Thursday, October 27, 2011

    CodePlex Daily Summary for Thursday, October 27, 2011

    Popular Releases
    - AcDown (Anime & Comic Downloader) v3.6: a downloader for Acfun, Bilibili, Tucao.cc, SF and other video and comic sites; runs on 32- and 64-bit Windows XP/Vista/7 and requires .NET Framework 2.0 (the original release notes are in Chinese).
    - Facebook C# SDK 5.3.1: a BETA release adding new features and bug fixes to v5.2.1. It removes the dependency on Code Contracts, enables Task Parallel support in .NET 4.0+, adds early-preview support for .NET 4.5 plus additional .NET 4.5 method overloads that use IProgress<T> for upload progress, and includes a new CS-WinForms-AsyncAwait.sln sample demonstrating async/await, upload progress reporting with IProgress<T>, and cancellation support. Query/QueryAsync methods use the graph api instead...
    - SQL Backup Helper v1.0: adds a Description column to the settings table, adds automatic LOG file truncation to the BACKUP stored procedure, and backs up only databases in ONLINE status.
    - Flowton Release 0.2: the first official release of Flowton along with source; print preview/print is enabled, with minor bug fixes from the 0.1 alpha release.
    - MySemanticSearch Sample Installer (CTP3): this release of the MySemanticSearch Sample works with SQL Server 2012 CTP3. Installation: download and execute the self-extracting archive, accept the licensing agreement, choose a target directory and extract the files, open a Windows PowerShell prompt with elevated privileges, execute Set-ExecutionPolicy Unrestricted, close the prompt, then run C:\MySema...
    - Microsoft Ajax Minifier 4.33: adds a JSParser.ParseExpression method to parse JavaScript expressions rather than source elements, adds a -strict switch (CodeSettings.StrictMode) to force input code into ECMA5 strict mode (extra error checking, "use strict" at the top), and fixes a bug that occurred when the MinifyCode setting was false but RemoveUnneededCode was left at its default value of true.
    - Path Copy Copy 8.0: a new version that mostly adds requested features (11340, 11339, 11338, 11337) and a more elaborate Settings UI with several tabs, including notes that explain the use and purpose of the various options. Path Copy Copy documentation is also on the way, both to explain how to develop custom plugins and to explain how to pre-configure options for network admins. Stay tuned.
    - MVC Controls Toolkit 1.5.0: adds the new Client Blocks feature of Views, a new "move" js method for TreeViews, the NewHtmlCreated js event for the DataGrid, and an improved ChoiceList structure that now also allows the selection list of a dropdown to be chosen with a lambda expression. Fixes an issue with partial trust, client handling of conditional attributes, a bug where TreeView node moves were sometimes not reflected on the server, and an issue in the Mvc3 NuGet package that was not able to uninstall properly...
    - Free SharePoint Master Pages: Buried Alive (Halloween) Theme: created for Halloween; includes the theme file, a custom CSS file and images. Created by Al Roome (@AlstarRoome). Features custom styling for web parts and a custom background. Screenshot: https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.png
    - DevForce Application Framework 2.0.3 RTW: prerequisites are WPF 4.0, Silverlight 4.0 and DevForce 2010 6.1.3.1. The download contains debug and release assemblies, API documentation, source code, License.txt and Requirements.txt. Release highlights: EventAggregator event forwarding, EntityManagerInterceptor<T> to intercept EntityManager events, IHarnessAware to allow ViewModel setup when executed inside the Development Harness, improved design-time stability, support for add-in development, and CoroutineFns.To...
    - NicAudio 2.0.5: minor change to accept special DTS stereo modes (LtRt, AB, ...).
    - Windows Azure Toolkit for Windows Phone v1.3.1: upgrades Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 (September 2011), upgrades the tools to support the Windows Phone Developer Tools RTW, updates SQL Azure-only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package), changes the Shared Access Signature service interface to support more operations, and refactors the Blobs API to have an interface and usage similar to that provided by the Windows Azure SDK Stor...
    - xUnit.net Contrib xunitcontrib-resharper 0.4.4 (dotCover): a test runner plugin for ReSharper 6.0 RTM targeting all versions of xUnit.net (see the xUnit.net project to download xUnit.net itself). This release adds support for dotCover code coverage (4132). Note that this build works against ALL VERSIONS of xunit; the files are compiled against xunit.dll 1.8 - DO NOT REPLACE THIS FILE. Thanks to xunit's version-independent runner system, this package can r...
    - BookShop: BookShop WP7 client.
    - Ribbon Editor for Microsoft Dynamics CRM 2011 (0.1.2122.266): adds CodePlex and PayPal links and a new icon; fixes a bug where it could not connect to an IFD deployment when the discovery service URL has been customized.
    - SiteMap Editor for Microsoft Dynamics CRM 2011 (1.0.921.340): adds CodePlex and PayPal links and a new icon.
    - DotNet.Framework.Common 4.0 (release notes are in Chinese).
    - XML Explorer 4.0.5: adds a 'Copy Attribute XPath to Address Bar' feature, methods for decoding node text and value from Base64-encoded strings and copying them to the clipboard, and 'ChildNodeDefinitions' options for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...
    - CODE Framework 4.0.11021.0: this build adds a lot of the WPF components, including the MVVC and MVC components as well as a "Metro" and "Battleship" style.
    - WiX Toolset v3.6 Beta: first beta release of WiX v3.6. The primary focus is on Burn, but there are also many small bug fixes to the core toolset. For more information see http://robmensching.com/blog/posts/2011/10/24/WiX-v3.6-Beta-released

    New Projects
    - 5by5by4 Tic Tac Toe Project: 5by5by4 Tic Tac Toe project for CS 3420.
    - ASP.Net Membership provider for MongoDb: a role and membership provider for ASP.NET with MongoDb as the database; makes use of the Norm LINQ provider for MongoDb.
    - Bazookabird TFS Dashboard: a simple TFS dashboard showing build results, TFS queries of your choice, and the availability of build machines. Only a feature-lacking proof of concept yet; will hopefully be useful in the end.
    - BestWebApp: web service and web application.
    - Code Made Simple: a group of projects to make coding life simple.
    - DefenseXna: an XNA project.
    - Digibib.NET: a port of the Digibux reader software for Directmedia Publishing's Digitale Bibliothek.
    - FaceComparerDistributed: distributed version of the face-compare project.
    - FMSSolution: FMS.
    - ISO Analyzer: a tool that makes it easier to analyze ISO 8583 financial transactions and also provides a platform to create a host simulator capable of receiving requests and sending back responses. A WinForms application developed in C#.
    - ITU Project: ITU project, more than just keyboard shortcuts (navigation between running applications).
    - Maintenance Province Data: (the project description still shows the default placeholder text).
    - Mạng chia sẻ công việc: a work-sharing network.
    - MetroTask: an example Metro-based application using C#.
    - MyHomeFinance: helps to add and keep a record of finances.
    - MySemanticSearch Sample: a sample content management application that demonstrates the semantic search capabilities introduced in SQL Server 2012. MySemanticSearch allows you to visualize tag clouds for content stored in FileTables and find similar content using semantic search.
    - NetBlocks: an implementation of the Unit-of-Work and Repository patterns using Entity Framework 4.1 and Unity. The project also includes code that can be used to initialize an application's run-time environment from a set of components, with example components for typed configuration settings, caching and a factory component based on Unity. Also included is an example of how to represent a database command with a C# class that transforms the results to typed objects.
    - Online shopping website in ASP.NET (Open Source Project): asp.net, C#, shopping cart, college project, Visual Studio 2010, Visual Web Developer 2010.
    - Pak Master: Pak Explorer for the forthcoming four.
    - Pizza Service: an MVC3 project that aims to host both a backend web service and a frontend page for ordering pizza, managing orders and customers, and providing a driver map.
    - ScadaEveryWhere: ScadaEveryWhere.
    - SilverTwitterSearch: a Silverlight library for the Twitter Search API.
    - Snst Salix: Salix is the codename for a custom time management and task assignment solution.
    - SolutionManagementKit: aims to combine several monitoring/support tools, focusing primarily on (T)SQL/MDX parsing for SQL 2008 R2 and SSAS 2008 R2 as well as cube maintenance. Developed in C# / VB .NET.
    - Target: Blank Orchard Module: if enabled, outgoing links open in a new window (just like with the deprecated target="_blank" attribute).
    - WPPersonality: psychological tests for Windows Phone.
    - XrmLibrary: a base library for rapid integration with one or more Microsoft Dynamics CRM 2011 environments. The XrmLibrary contains thread-safe singleton implementations of both the CRM IOrganizationService (one or many instances) and a tracer/logger that utilizes Apache's log4net.

    Read the article

  • NRF Online Merchandising Workshop: Where Online Retailers Are Focusing for Holiday and Beyond

    - by Rose Spicer-Oracle
    Last month we attended the NRF Online Merchandising Workshop in LA, and it was a great opportunity to catch up with our customers, meet new retailers, and hear some great presentations from VF Corporation, Zazzle, Julep Beauty, Backcountry, eBags and more. The one-on-one conversations with merchants and the keynote presentations carried the same themes across companies of all sizes and across verticals. With only 125 days left (and counting) until Black Friday, these conversations provided great insight into what is top of mind for retailers during the most stressful time of their year, and a sneak peek into what they will deliver this holiday season. Some of the most popular topics were:

    When to start promoting for holiday: it seems like a funny conversation to have in July, but a number of retailers said they already had their holiday shopping gift guides live on their sites, and that the guides were attracting a significant portion of onsite traffic. When it comes to timing, most retailers were questioning when to begin their holiday promotions, carefully balancing when to release pricing and specials while knowing that customers are holding out for last-minute deals and price drops. Many retailers noted the frustrations around transparent pricing by Amazon and a few other mega-retailers last year, which published their "lowest prices of the season" as early as October, assuring shoppers that those prices were the best they could get all season long. Many retailers felt their hands were forced and dropped prices. Others kept their set pricing, met negative customer reaction, and in some cases missed their holiday goals. The pressure is on, and most retailers identified November 1 as their target start date for the holiday promotions blitz. Some are even waiting for the big players to release their "lowest prices of the season" guides and will then follow suit.

    Attribution is tough, and a huge focus: understanding the path to conversion is a tough nut to crack, especially in the new omnichannel world where consumers use multiple touchpoints to make a single purchase and internal management wants hard data. This has led many retailers to invest in attribution: carefully tracking their online marketing efforts to determine what gets "credit" for the sale, instead of giving credit to the "last click." Retailers noted that it is very difficult to determine the numbers when online and offline worlds collide, such as when a shopper uses digital channels for research and then makes a purchase in a store. As one of the presenters from The North Face mentioned in her keynote, a key to enabling better customer service and satisfaction around converged online and offline sales is training the in-store staff and creating a culture where it eventually "doesn't matter what group gets the credit" if they all add to the sale. No doubt, attribution will be a big area of retail investment in the coming years.
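    To make the attribution trade-off concrete, here is a minimal, hypothetical sketch (the channel names and path are invented for illustration, not data from any retailer above) showing how credit for one conversion shifts with the model chosen: last-click hands everything to the final touchpoint, while a simple linear model spreads credit across the whole path to purchase.

        from collections import defaultdict

        def last_click(touchpoints):
            """Give 100% of the conversion credit to the final touchpoint."""
            return {touchpoints[-1]: 1.0}

        def linear(touchpoints):
            """Spread the conversion credit evenly across every touchpoint."""
            share = 1.0 / len(touchpoints)
            credit = defaultdict(float)
            for channel in touchpoints:
                credit[channel] += share
            return dict(credit)

        # A hypothetical path to purchase: research online, buy after an email nudge.
        path = ["facebook_ad", "organic_search", "retargeting_ad", "email"]

        print(last_click(path))  # {'email': 1.0}
        print(linear(path))      # each of the four channels gets 0.25

    Real attribution programs use far richer models (time decay, position-based, data-driven), but even this toy comparison shows why the "who gets credit" question becomes political once multiple teams own different touchpoints.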
How to plan for the converged world: planning to ensure inventory gets where it needs to be was another concern. In conversations with retailers, we advised them to analyze customer patterns: where shoppers purchase items, where the items were sourced from, and even where items are returned. This analysis is very valuable in determining inventory plans. From there, retailers can more accurately plan and allocate inventory to support both online and offline customer behavior. As we head into the holiday season, accurate enterprise-wide inventory visibility, and getting that information into the hands of associates, is even more critical to the brand-wide customer experience.

Improving the search / navigation / usability of the site(s): aside from the big ideas and the standard holiday pricing pressure, most of our conversations centered on continuing to improve the basics of the site. Reinvesting in search and navigation came up time and time again (FitForCommerce blogged about what a big topic it was at the event as well). Obviously, getting shoppers onto their path quickly and letting them find what they need fast is critical, but it was still interesting to hear just how much effort is going into honing the search and navigation experience. Adding new elements such as typeahead, inventive navigation refinements, and new navigation categories like gift guides, specialized boutiques and flash sales were top of mind, in addition to searchandising and search-driven product recommendations. (Oracle can help!)

Reducing cart abandonment: always a hot topic for every online retailer. Getting shoppers to the cart is often less than half the battle; getting them to click "buy" and complete the transaction is much more difficult. While retailers carefully study the checkout process and where shoppers tend to bounce, they know that how they design their checkout page is critical. We are all online shoppers in our personal lives, and we know how frustrating it is when total prices are not transparent (i.e., shipping, processing and taxes are not included until the very last screen before clicking that buy button). Online retailers are struggling with where in the checkout process to surface the total price in order to reduce cart abandonment, without showing the total figure so early that it keeps shoppers from reaching checkout at all. Recent research shows that providing total pricing before the checkout process dramatically reduces cart abandonment, as it acts as a filter for those shopping within a specific price band. Much of the cart abandonment discussion leads us to...

The free shipping / free returns question: it is no secret that because of Amazon and programs like Prime, consumers expect free shipping, much to the chagrin of the smaller retailer. The reality is that if you are not a mega-retailer, shipping is an expensive part of doing business that does not allow most retailers to keep prices low and offer free shipping. This has many retailers venturing down the "free returns" path, especially in apparel. A number of retailers we spoke with are testing a flat-rate shipping fee with free returns to see if they can crack the price threshold at which shoppers are willing to pay for shipping with an added service. But free shipping remains king.

Social ads and retargeting: they are working, but do they turn off consumers? That is the big question.
Every retailer we spoke with during a roundtable on the topic said that social ads and retargeting (where that pair of boots you have been eyeing on a site magically follows you around the Internet) work and are meeting campaign goals. The larger question many retailers are asking is whether this type of tactic turns off a large number of shoppers, even if the campaigns are meeting their early goals. Retailers also mentioned that Facebook ads are working very well for them, especially for new customer acquisition, serving as a complementary channel to SEO when it comes to engaging new customers.

While there are always new things to experiment with in retail, standard challenges are top of mind as retailers scramble to get ready for holiday. It will undoubtedly be another record-breaking online shopping season, but as retailers get more advanced with each Black Friday, expect some exciting things. That excitement needs to be backed by sound solutions and optimized operations. Then again, consumers are expecting more than ever, so I don't doubt that retailers are already thinking about the possibilities of holiday 2015... and beyond.

Customers who read this article also found value in the following stories:
- Personalization for Retail: http://blogs.oracle.com/retail/entry/personalization_for_retail
- Shop Direct User Experience Focus Drives Sales: https://blogs.oracle.com/retail/entry/shop_direct_user_experience_focus
- Making Waves: Australian Online Retailer SurfStitch: https://blogs.oracle.com/oracleretail/entry/surf_stitch
- What's new in Oracle Commerce v11.1 for Retail
- What the Content+Commerce Equation is Missing

    Read the article

  • Standards Corner: OAuth WG Client Registration Problem

    - by Tanu Sood
Phil Hunt is an active member of multiple industry standards groups and committees (see brief bio at the end of the post) and has spearheaded discussions, creation and ratification of industry standards including the Kantara Identity Governance Framework, among others. Being an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news and updates, and to discuss use cases and implementation success stories (and even failures) around industry standards in this monthly column.

    Author: Phil Hunt

    This afternoon, the OAuth Working Group will meet at IETF88 in Vancouver to discuss some topics important to the maturation of OAuth. One of them is the OAuth client registration problem. OAuth (RFC6749) was initially developed with a simple deployment model where there is only a monopoly or singleton cloud instance of a web API (e.g., there is one Facebook, one Google, one LinkedIn, and so on). When the API publisher and API deployer are the same monolithic entity, it is easy for developers to contact the provider and register their app to obtain a client_id and credential. But what happens when the API is for an open source project where there may be thousands of deployed copies of the API (e.g., WordPress)? In these cases, the authors of the API are not the people running the API. In these scenarios, how does the developer obtain a client_id?

    An example of an "open deployed" API is OpenID Connect. Connect defines an OAuth-protected resource API that can provide personal information about an authenticated user, in effect creating a potentially common API for identity providers like Facebook, Google, Microsoft, Salesforce, or Oracle. In Oracle's case, Fusion applications will soon have RESTful APIs that are deployed in many different ways in many different environments. How will developers write apps that can work against an openly deployed API with whom the developer can have no prior relationship? At present, the OAuth Working Group has two proposals to consider.

    Dynamic Registration
    Dynamic Registration was originally developed for OpenID Connect and UMA. It defines a RESTful API in which a prospective client application with no client_id creates a new client registration record with a service provider and is issued a client_id and credential, along with a registration token that can be used to update the registration over time. As proof of success, the OIDC community has done substantial implementation of this spec and feels committed to its use.
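    As a rough illustration of the idea (the endpoint URL and the exact field names below are assumptions based on the dynamic registration drafts, not normative text), a prospective client that starts with no identity of its own might register like this:

        import requests

        # Hypothetical registration endpoint of one particular deployment of the API.
        REGISTRATION_ENDPOINT = "https://api.example.com/oauth2/register"

        # The prospective client describes itself; it has no client_id yet.
        registration_request = {
            "client_name": "Example Photo Printer",
            "redirect_uris": ["https://client.example.org/callback"],
            "grant_types": ["authorization_code"],
            "token_endpoint_auth_method": "client_secret_basic",
        }

        resp = requests.post(REGISTRATION_ENDPOINT, json=registration_request)
        resp.raise_for_status()
        registration = resp.json()

        # The provider mints per-instance state: an id, a credential, and a
        # registration access token used to read or update this registration later.
        client_id = registration["client_id"]
        client_secret = registration.get("client_secret")
        registration_token = registration.get("registration_access_token")

    Note that every installed copy of the client that runs this exchange becomes a brand-new, unique registration at the provider, which is exactly where the concerns below begin.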
    Why not approve? Well, the answer is that some of us had some concerns, namely:
    - Recognizing instances of software: dynamic registration treats all clients as unique. It has no defined way to recognize that multiple copies of the same client are being registered, other than assuming that if the registration parameters are similar it might be the same client.
    - Versioning and policy approval of open APIs and clients: many service providers have to worry about change management. They expect to have approval cycles that approve versions of server and client software for use in their environment. In some cases approval might be wide open, but in many cases approval might be down to the specific class of software and version.
    - Registration updates: when does a client actually need to update its registration? Shouldn't it be never? Is there some characteristic of deployed code that would cause it to change?
    - Options lead to complexity: because each client is treated as unique, it becomes unclear how clients and servers will agree on what credential forms are acceptable and what OAuth features are allowed and disallowed. Yet the reality is that developers will write their application to work in a limited number of ways; they can't implement all the permutations and combinations that potential service providers might choose.
    - Stateful registration: if the primary motivation for registration is to obtain a client_id and credential, why can't this be done in a stateless fashion using assertions?
    - Denial of service: with so much stateful registration and the need for multiple tokens to be issued, will this not lead to a denial-of-service risk or resource depletion? At the very least, because of the information gathered, it would be difficult for service providers to clean up "failed" registrations and tell active clients from inactive or false ones.
    There has yet to be much wide-scale "production" use of dynamic registration other than in small closed communities.

    Client Association
    A second proposal, Client Association, has been put forward by Tony Nadalin of Microsoft and myself. We took a look at existing use patterns to come up with a new proposal. At the Berlin meeting, we considered how WS-STS systems work. More recently, I reviewed how mobile messaging clients work, looking at how Apple, Google, and Microsoft each handle registration with APNS, GCM, and WNS, and a similar pattern emerges: use an existing credential (mutual TLS auth) or a client bearer assertion, and swap it for a device-specific bearer assertion. In the Client Association proposal, the developer's registration is handled by having the developer register with the API publisher (as opposed to the party deploying the API) and obtain a software "statement". Or, if there is no publisher that can sign a statement, the developer may include their own self-asserted software statement. A software statement is a special type of assertion that serves to lock application registration profile information into a signed assertion. The statement is included with the client application and can then be used by the client to swap for an instance-specific client assertion, as defined by section 4.2 of the OAuth Assertion draft and profiled in the Client Association draft. The software statement gives a service provider a way to recognize and configure policy to approve classes of software clients, and it simplifies the actual registration to a simple assertion swap. Because the registration is an assertion swap, registration is no longer "stateful", meaning the service provider does not need to store any information to support the client (unless it wants to).
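    As a rough sketch of that flow (the claim names, endpoint and key handling here are my own assumptions for illustration, not text from the draft): the publisher signs one software statement describing the application, the same statement ships inside every installed copy, and each deployed instance later presents it to whichever service provider it talks to, receiving instance-specific credentials in return.

        import time
        import jwt        # PyJWT, used here to sign the illustrative software statement
        import requests

        # Placeholder key material for the sketch; in practice the publisher's signing key.
        PUBLISHER_PRIVATE_KEY = open("publisher_key.pem").read()

        # 1. Done once by the API *publisher*: sign a software statement that locks the
        #    application's registration profile into a verifiable assertion.
        software_statement = jwt.encode(
            {
                "iss": "https://publisher.example.org",   # who vouches for the app
                "software_id": "example-photo-printer",   # a class of software, not an instance
                "redirect_uris": ["https://client.example.org/callback"],
                "iat": int(time.time()),
            },
            PUBLISHER_PRIVATE_KEY,
            algorithm="RS256",
        )

        # 2. Done by each deployed instance: swap the statement (an assertion) for
        #    instance-specific credentials at the chosen service provider. The provider
        #    stays stateless; it only needs the publisher's public key to verify the JWT.
        resp = requests.post(
            "https://api.example.com/oauth2/token",   # hypothetical endpoint for the swap
            data={
                "grant_type": "client_credentials",
                "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
                "client_assertion": software_statement,
            },
        )
        instance_credentials = resp.json()

    The key difference from the dynamic registration sketch above is that the provider can recognize the software_id and apply policy to a whole class of clients, instead of treating every installed copy as a unique, stateful registration.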
    Has this been implemented yet? Not directly. We've only delivered draft 00 as an alternate way of solving the problem using well-known patterns whose security and scale characteristics are well understood.

    Dynamic Take II
    At roughly the same time that Client Association and Software Statement were published, the authors of Dynamic Registration published a "split" version of the spec (draft-richer-oauth-dyn-reg-core and draft-richer-oauth-dyn-reg-management). While some of the concerns above are addressed, some differences remain. Registration is now a simple POST request. However, it defines a new method for issuing client tokens, whereas Client Association uses RFC6749's existing extension point. The concern here is whether future client access token formats would be addressed properly. Finally, dyn-reg-core does not yet support software statements.

    Conclusion
    The WG has some interesting discussion ahead to bring this back to a single set of specifications. Dynamic Registration has significant implementation, but Client Association could be a much-improved way to simplify implementation of the overall OpenID Connect specification and improve adoption. In fairness, the existing editors have already come a long way. Yet there are those with significant investment in the current draft. And there are many who have expressed that they don't care; they just want a standard. There is lots of pressure on the working group to reach consensus quickly. And that, folks, is how the sausage is made.

    Note: John Bradley and Justin Richer recently published draft-bradley-stateless-oauth-client-00, which on first look brings the proposals closer together. Some of the details seem less well defined, but the same could be said of client-assoc and software-statement. I hope we can merge these specs this week.

    About the Writer: Phil Hunt joined Oracle as part of the November 2005 acquisition of OctetString Inc., where he headed software development for what is now Oracle Virtual Directory. Since joining Oracle, Phil has worked as a CMTS in the Identity Standards group, where he developed the Kantara Identity Governance Framework and provided significant input to JSR 351. Phil participates in several standards development organizations such as IETF and OASIS, working on federation, authorization (OAuth), and provisioning (SCIM) standards. Phil blogs at www.independentid.com and tweets as @independentid.

    Read the article
