Search Results

Search found 5484 results on 220 pages for 'mod headers'.

  • Android Google Cloud Messaging - not certain what parameters I should put when creating the push notification

    - by Genadinik
    I am working on a PHP script to send the notification to the GCM server, and I am working from this example:

        public function send_notification($registatoin_ids, $message) {
            // include config
            include_once './config.php';
            // Set POST variables
            $url = 'https://android.googleapis.com/gcm/send';
            $fields = array(
                'registration_ids' => $registatoin_ids,
                'data' => $message,
            );
            $headers = array(
                'Authorization: key=' . GOOGLE_API_KEY,
                'Content-Type: application/json'
            );
            // Open connection
            $ch = curl_init();
            // Set the url, number of POST vars, POST data
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            // Disabling SSL certificate support temporarily
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
            curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($fields));
            // Execute post
            $result = curl_exec($ch);
            if ($result === FALSE) {
                die('Curl failed: ' . curl_error($ch));
            }
            // Close connection
            curl_close($ch);
            echo $result;
        }

    But I am not certain what the values should be for these options: CURLOPT_POSTFIELDS, CURLOPT_SSL_VERIFYPEER, CURLOPT_RETURNTRANSFER, CURLOPT_HOST, CURLOPT_URL. Would anyone happen to know what the values for these should be? Thank you!
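
    A minimal sketch of what those options typically receive for this kind of JSON POST. The registration ID and message below are placeholder values, and note that CURLOPT_HOST is not a standard PHP cURL option; the host is simply part of the URL passed to CURLOPT_URL:

        <?php
        // Placeholder data; GOOGLE_API_KEY comes from config.php as in the question.
        $registration_ids = array('device-registration-id');
        $message = array('title' => 'Hello', 'body' => 'Test push');
        $fields = array('registration_ids' => $registration_ids, 'data' => $message);

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, 'https://android.googleapis.com/gcm/send'); // the GCM endpoint
        curl_setopt($ch, CURLOPT_POST, true);                                      // send an HTTP POST
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            'Authorization: key=' . GOOGLE_API_KEY,
            'Content-Type: application/json',
        ));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);              // return the response as a string
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);              // leave certificate checks on in production
        curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($fields));  // the JSON request body
        $result = curl_exec($ch);
        curl_close($ch);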

    Read the article

  • PHP session_start() error

    - by tooepic
    Hi, I've used a sample found online and applied it to my code:

        <?php
        session_start();
        if (isset($_REQUEST["email"])) {
            $_SESSION["name"] = true;
            $host = $_SERVER["HTTP_HOST"];
            $path = dirname($_SERVER["PHP_SELF"]);
            $sid = session_name() . "=" . session_id();
            header("Location: index.php?$sid");
            exit;
        }
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        ...

    ... and the rest of the HTML code. When I open this page, I get these errors:

        Warning: session_start() [function.session-start]: Cannot send session cookie - headers already sent by (output started at /data/server/user/directory/sub-directory/login.php:1) in /data/server/user/directory/sub-directory/login.php on line 2
        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /data/server/user/directory/sub-directory/login.php:1) in /data/server/user/directory/sub-directory/login.php on line 2

    I looked around to resolve this issue and saw a few posts about it on this site as well, but I just can't get a good grip on it and can't find the answer. Please help. Thanks.
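
    A minimal sketch of the usual fix, assuming the cause is what the warning reports: some output (often a blank line, a stray space, or a UTF-8 byte-order mark saved by the editor) is emitted before session_start() on line 1 of login.php. Making the opening <?php tag the very first bytes of the file and calling session_start() before anything is echoed normally clears both warnings:

        <?php
        // Nothing may precede this opening tag: no blank line, no space, no BOM.
        session_start();

        if (isset($_REQUEST["email"])) {
            $_SESSION["name"] = true;
            $sid = session_name() . "=" . session_id();
            header("Location: index.php?" . $sid);
            exit;
        }
        // Only after this point does the page start sending HTML output.
        ?>
        <!DOCTYPE html ...>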

    Read the article

  • How to debug a Python "del self.callbacks[s][cid]" KeyError when the error message does not indicate where in my code the error is

    - by lkloh
    In a Python program I am writing, I get this error:

        Traceback (most recent call last):
          File "/Applications/Canopy.app/appdata/canopy-1.4.0.1938.macosx-x86_64/Canopy.app/Contents/lib/python2.7/lib-tk/Tkinter.py", line 1470, in __call__
            return self.func(*args)
          File "/Users/lkloh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 413, in button_release_event
            FigureCanvasBase.button_release_event(self, x, y, num, guiEvent=event)
          File "/Users/lkloh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 1808, in button_release_event
            self.callbacks.process(s, event)
          File "/Users/lkloh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/cbook.py", line 525, in process
            del self.callbacks[s][cid]
        KeyError: 103

    Do you have any idea how I can debug this or what could be wrong? The error message does not point to anywhere in code I have personally written. I only get the error after I close my GUI window, but I want to fix it even though it does not break the functionality of my code. The error is part of a very big program I am writing, so I cannot post all my code, but below is the code I think is relevant:

        def save(self, event):
            self.getSaveAxes()
            self.save_connect()

        def getSaveAxes(self):
            saveFigure = figure(figsize=(8,1))
            saveFigure.clf()
            # size of save buttons
            rect_saveHeaders = [0.04,0.2,0.2,0.6]
            rect_saveHeadersFilterParams = [0.28,0.2,0.2,0.6]
            rect_saveHeadersOverride = [0.52,0.2,0.2,0.6]
            rect_saveQuit = [0.76,0.2,0.2,0.6]
            # initialize axes
            saveAxs = {}
            saveAxs['saveHeaders'] = saveFigure.add_axes(rect_saveHeaders)
            saveAxs['saveHeadersFilterParams'] = saveFigure.add_axes(rect_saveHeadersFilterParams)
            saveAxs['saveHeadersOverride'] = saveFigure.add_axes(rect_saveHeadersOverride)
            saveAxs['saveQuit'] = saveFigure.add_axes(rect_saveQuit)
            self.saveAxs = saveAxs
            self.save_connect()
            self.saveFigure = saveFigure
            show()

        def save_connect(self):
            # set buttons
            self.bn_saveHeaders = Button(self.saveAxs['saveHeaders'], 'Save\nHeaders\nOnly')
            self.bn_saveHeadersFilterParams = Button(self.saveAxs['saveHeadersFilterParams'], 'Save Headers &\n Filter Parameters')
            self.bn_saveHeadersOverride = Button(self.saveAxs['saveHeadersOverride'], 'Save Headers &\nOverride Data')
            self.bn_saveQuit = Button(self.saveAxs['saveQuit'], 'Quit')
            # connect buttons to functions they trigger
            self.cid_saveHeaders = self.bn_saveHeaders.on_clicked(self.save_headers)
            self.cid_savedHeadersFilterParams = self.bn_saveHeadersFilterParams.on_clicked(self.save_headers_filterParams)
            self.cid_saveHeadersOverride = self.bn_saveHeadersOverride.on_clicked(self.save_headers_override)
            self.cid_saveQuit = self.bn_saveQuit.on_clicked(self.save_quit)

        def save_quit(self, event):
            self.save_disconnect()
            close()

    Read the article

  • .NET MVC What is the best way to disable browser caching?

    - by Chameera Dedduwage
    As far as my research goes, several steps are needed to make sure that browser caching is disabled. These HTTP headers must be set:

        Cache-Control: no-cache, no-store, must-revalidate, proxy-revalidate
        Pragma: no-cache, no-store
        Expires: -1
        Last-Modified: -1

    I have found out that this can be done in two ways. Way One: use the web.config file

        <add name="Cache-Control" value="no-store, no-cache, must-revalidate, proxy-revalidate"/>
        <add name="Pragma" value="no-cache, no-store" />
        <add name="Expires" value="-1" />
        <add name="Last-Modified" value="-1" />

    Way Two: use meta tags in _Layout.cshtml

        <meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate, proxy-revalidate" />
        <meta http-equiv="Pragma" content="no-cache, no-store" />
        <meta http-equiv="Expires" content="-1" />
        <meta http-equiv="Last-Modified" content="-1" />

    My question: which is the better approach, or are they equally acceptable? How do these relate to different platforms, and which browsers honor which headers? In addition, please feel free to add anything I've missed.

    Read the article

  • Problems piping in node.js

    - by alvaizq
    We have the following example in node.js:

        var http = require('http');

        http.createServer(function(request, response) {
            var proxy = http.createClient(8083, '127.0.0.1');
            var proxy_request = proxy.request(request.method, request.url, request.headers);
            proxy_request.on('response', function (proxy_response) {
                proxy_response.pipe(response);
                response.writeHead(proxy_response.statusCode, proxy_response.headers);
            });
            setTimeout(function(){
                request.pipe(proxy_request);
            }, 3000);
        }).listen(8081, '127.0.0.1');

    The example listens for requests on 127.0.0.1:8081 and forwards them to a dummy server (which always returns a 200 OK status code) on 127.0.0.1:8083. The problem is in the pipe between the input stream (readable) and the output stream (writable) when an async call runs first (in this case the setTimeout). The pipe doesn't work and nothing is sent to the dummy server on port 8083. Perhaps, when there is an async call (here the setTimeout) before the pipe call, the input stream changes to a "not readable" state, and after the async call the pipe doesn't send anything. This is just an example; we tested it with other async modules from the node.js community with the same result (ldapjs, etc). We tried to fix it with:

        request.readable = true;                     // before the pipe call
        request.pipe(proxy_request, {end : false});

    with the same result (the pipe doesn't work). Can anybody help us? Many thanks in advance and best regards.

    Read the article

  • HTTP POST using PHP and cURL failing

    - by user2916484
    I am trying to send an XML file to an external system, using the code below, which I found on the internet. I observed that when I echo the $xml variable, it does not show me the XML as a string; the XML is parsed and only the values are shown. The same thing happens when I send it to the external system, which then fails. Can you please tell me a way to keep the XML from being parsed, so I can send the entire XML text as a string to the external system? My code is below.

        <?php
        $xml = '<?xml version=\"1.0\" encoding="UTF-8" ?><shiporder><orderID>1234</orderID> <orderperson>John Smith</orderperson></shiporder>';
        $xml2 = '';
        $headers = array(
            "Content-type: text/xml",
            "Content-length: " . strlen($xml),
            "Connection: close",
        );
        // give the path of the third-party site
        $url = "http://<server>:<port>/XMII/Illuminator?service=WSMessageListener&mode=WSMessageListenerServer&NAME=shiporder&IllumLoginName=<name>&IllumLoginPassword=<pswd>";
        echo $xml;
        echo $url;
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        //curl_setopt($ch, CURLOPT_MUTE, 1);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        echo $output;
        curl_close($ch);
        ?>

    Regards, Gita
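
    A minimal sketch of one likely explanation and workaround, assuming the receiving system expects plain XML: a browser hides the tags when raw XML is echoed into an HTML page, so escape the debug output with htmlspecialchars(), and note that inside a single-quoted PHP string the \" sequences stay literal, which makes the version attribute in the declaration invalid. Building the string without those backslashes avoids that. The URL placeholders from the question are left untouched:

        <?php
        // Double quotes inside a single-quoted string need no escaping at all.
        $xml = '<?xml version="1.0" encoding="UTF-8"?>'
             . '<shiporder><orderID>1234</orderID>'
             . '<orderperson>John Smith</orderperson></shiporder>';

        echo htmlspecialchars($xml);   // debug output: the tags stay visible in the browser

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);   // $url as defined in the question
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            'Content-type: text/xml',
            'Content-length: ' . strlen($xml),
        ));
        curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);   // the raw XML string is the request body
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        curl_close($ch);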

    Read the article

  • Creating a new variable in C from only part of an existing u_char

    - by Alex Kloss
    I'm writing some C code to parse IEEE 802.11 frames, but I'm stuck trying to create a new variable whose length depends on the size of the frame itself. Here's the code I currently have:

        int frame_body_len = pkt_hdr->len - radio_hdr->len - wifi_hdr_len - 4;
        u_char *frame_body = (u_char *) (packet + radio_hdr->len + wifi_hdr_len);

    Basically, the frame consists of a header, a body, and a checksum at the end. I can calculate the length of the frame body by taking the length of the packet and subtracting the length of the two headers that appear before it (radio_hdr->len and wifi_hdr_len respectively), plus 4 bytes at the end for the checksum. However, how can I create the frame_body variable without the trailing checksum? Right now, I'm initializing it with the contents of the packet starting at the position after the two headers, but is there some way to start at that position and end 4 bytes before the end of the packet? packet is a pointer to a u_char, if it helps. I'm a new C programmer, so any and all advice about my code you can give me would be much appreciated. Thanks!

    Read the article

  • PHP mail() not working on remote server

    - by Amaerth
    I am developing an application and have been testing the mail() function in PHP. The following works just fine on my local machine to send emails to myself, but as soon as I try to send it from the testing environment to my local machine, it silently fails. I will still get the "Mail Sent" message, but no message is sent. I turned on the mail logging in the php.ini file, but even that doesn't seem to be populated after I refresh the page. Again, the .php files and php.ini files are identical in both environments. Port 25 has been opened on the testing environment, and we are using a Microsoft Exchange server.

        <?php
        $to = "[email protected]";
        $subject = "Test mail";
        $message = "Hello! This is a simple email message.";
        $from = "[email protected]";
        $headers = "From:" . $from;
        mail($to, $subject, $message, $headers);
        echo "Mail Sent.";
        ?>

    SMTP area of the php.ini file:

        [mail function]
        ; For Win32 only.
        ; http://php.net/smtp
        SMTP = exhange.server.org
        ; http://php.net/smtp-port
        smtp_port = 25
        ; For Win32 only.
        ; http://php.net/sendmail-from
        sendmail_from = [email protected]
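
    A minimal sketch of one way to narrow this down, assuming the testing box simply cannot reach the Exchange relay named in php.ini: set the relay at runtime, check mail()'s return value instead of printing "Mail Sent." unconditionally, and print the last PHP error if it fails. The host name below is a placeholder, and the addresses are kept as the placeholders from the question:

        <?php
        // Win32-only mail() settings, overridden at runtime for the test.
        ini_set('SMTP', 'exchange.example.org');
        ini_set('smtp_port', '25');
        ini_set('sendmail_from', '[email protected]');

        $ok = mail('[email protected]', 'Test mail',
                   'Hello! This is a simple email message.',
                   'From: [email protected]');

        if ($ok) {
            echo "Mail accepted by the relay.";
        } else {
            $err = error_get_last();   // mail() raises a warning on SMTP failures
            echo "mail() failed: " . ($err ? $err['message'] : 'unknown error');
        }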

    Read the article

  • InputStreamBody in HttpMime 4.0.3 settings for content-length

    - by Ram
    I am trying to send a multipart form-data POST through my Java code. Can someone tell me how to set the Content-Length in the following? There seem to be headers involved when we use InputStreamBody, which implements the ContentDescriptor interface. Calling getContentLength on the InputStreamBody gives me -1 after I add the content. I subclassed it to make contentLength return the length of my byte array, but I am not sure whether the other headers required by ContentDescriptor will be set for a proper POST.

        HttpClient httpclient = new DefaultHttpClient();
        HttpPost httpPost = new HttpPost(myURL);
        ContentBody cb = new InputStreamBody(new ByteArrayInputStream(bytearray), myMimeType, filename);
        //ContentBody cb = new ByteArrayBody(bytearray, myMimeType, filename);
        MultipartEntity mpentity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
        mpentity.addPart("key", new StringBody("SOME_KEY"));
        mpentity.addPart("output", new StringBody("SOME_NAME"));
        mpentity.addPart("content", cb);
        httpPost.setEntity(mpentity);
        HttpResponse response = httpclient.execute(httpPost);
        HttpEntity resEntity = response.getEntity();

    Read the article

  • table in drupal with edit link

    - by user544079
    I have a table created in Drupal with an edit link pointing to the input form. The problem is that it only uses the last row's values in the $email and $comment variables. Can anyone suggest how to modify the table display so the edit link points to the corresponding record?

        function _MYMODULE_sql_to_table($sql) {
          $html = "";
          // execute sql
          $resource = db_query($sql);
          // fetch database results in an array
          $results = array();
          while ($row = db_fetch_array($resource)) {
            $results[] = $row;
            $email = $row['email'];
            $comment = $row['comment'];
            drupal_set_message('Email: '.$email. ' comment: '.$comment);
          }
          // ensure results exist
          if (!count($results)) {
            $html .= "Sorry, no results could be found.";
            return $html;
          }
          // create an array to contain all table rows
          $rows = array();
          // get a list of column headers
          $columnNames = array_keys($results[0]);
          // loop through results and create table rows
          foreach ($results as $key => $data) {
            // create row data
            $row = array(
              'edit' => l(t('Edit'), "admin/content/test/$email/$comment/ContactUs", $options=array()),
            );
            // loop through column names
            foreach ($columnNames as $c) {
              $row[] = array(
                'data' => $data[$c],
                'class' => strtolower(str_replace(' ', '-', $c)),
              );
            }
            // add row to rows array
            $rows[] = $row;
          }
          // loop through column names and create headers
          $header = array();
          foreach ($columnNames as $c) {
            $header[] = array(
              'data' => $c,
              'class' => strtolower(str_replace(' ', '-', $c)),
            );
          }
          // generate table html
          $html .= theme('table', $header, $rows);
          return $html;
        }

        // then you can call it in your code...
        function _MYMODULE_some_page_callback() {
          $html = "";
          $sql = "select * from {contactus}";
          $html .= _MYMODULE_sql_to_table($sql);
          return $html;
        }
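
    A minimal sketch of the usual fix, assuming the goal is an edit link per record: build the link inside the row loop from that row's own values, instead of from $email and $comment, which only hold whatever the last iteration of the fetch loop left behind. Function and path names follow the question's code, so this fragment would replace the "loop through results" block above:

        // Derive the Edit link from the current row's data on each iteration.
        foreach ($results as $key => $data) {
          $row = array(
            'edit' => l(t('Edit'),
                        "admin/content/test/{$data['email']}/{$data['comment']}/ContactUs"),
          );
          foreach ($columnNames as $c) {
            $row[] = array(
              'data' => $data[$c],
              'class' => strtolower(str_replace(' ', '-', $c)),
            );
          }
          $rows[] = $row;
        }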

    Read the article

  • PHP mail() function: is this way of using it safe?

    - by Camran
    I have a classifieds website, and inside each classified there is a small form. This form is for users to be able to tip their "friends":

        <form action="/bincgi/tip.php" method="post" name="tipForm" id="tipForm">
        Tip: <input name="email2" id="email2" type="text" size="30" />
        <input type="submit" value="Skicka Tips"/>
        <input type="hidden" value="<?php echo $ad_id;?>" name="ad_id2" id="ad_id2" />
        <input type="hidden" value="<?php echo $headline;?>" name="headline2" id="headline2" />
        </form>

    The form is then submitted to a tip.php page, and here is my question: is the code below safe, i.e. is it good enough, or do I need to add some sanitization and more safety checks?

        $to = filter_var($_POST['email2'], FILTER_SANITIZE_EMAIL);
        $ad_id = $_POST['ad_id2'];
        $headline = $_POST['headline2'];
        $subject = 'You got a tip';
        $message = 'Hi. You got a tip: '.$headline.'.\n';
        $headers = 'From: [email protected]\r\n';
        mail($to, $subject, $message, $headers);

    I haven't tested the above yet.
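
    A minimal sketch of a slightly hardened version, offered as one possible answer to the safety question: validate the address instead of only sanitizing it, strip newlines from the user-supplied headline so it cannot inject extra mail headers, and use double quotes so "\r\n" and "\n" become real line breaks rather than literal characters. The From address is kept as the placeholder from the question:

        <?php
        $to = filter_var($_POST['email2'], FILTER_VALIDATE_EMAIL);
        if ($to === false) {
            die('Please enter a valid e-mail address.');
        }
        // Remove CR/LF so user input cannot smuggle additional mail headers.
        $headline = str_replace(array("\r", "\n"), ' ', $_POST['headline2']);
        $ad_id    = (int) $_POST['ad_id2'];

        $subject = 'You got a tip';
        $message = "Hi. You got a tip: {$headline}.\n";
        $headers = "From: [email protected]\r\n";

        mail($to, $subject, $message, $headers);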

    Read the article

  • Sending email with PHP mail()

    - by david_85
    I'm trying to send automated emails with mail(). It sends some emails but not all, around 50%. To test, I'm using the same email address for all emails, and still only some get delivered. I'm using localhost XAMPP. Here's the code:

        if($_POST['sendEmail'] == "SEND Email"){
            ob_start();
            $buffer = str_repeat(" ", 4096);
            $buffer .= "\r\n some HTML \r\n";
            set_time_limit(0);
            $noEmails = $last - $first + 1;
            echo "Emails sent (of $noEmails):";
            for($index = $first; $index <= $last; $index++){
                $to = $email["$index"];
                $subject = "Hey {$firstName["$index"]}!";
                $message = "$emailMessage";
                $headers = 'From: [email protected]' . "\r\n" .
                    'Reply-To: [email protected]' . "\r\n" .
                    'X-Mailer: PHP/' . phpversion();
                sleep(1);
                mail($to,$subject,$message,$headers);
                echo $buffer.$index;
                ob_flush();
                flush();
            }
            ob_end_flush();
        }

    Please give your suggestions.
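
    A minimal sketch of one way to narrow down the roughly 50% loss, assuming some of the calls are being refused by the local mailer itself: record mail()'s return value per recipient so messages PHP failed to hand off can be told apart from ones dropped later in delivery. Variable names follow the question's code, and the addresses remain the placeholders shown there:

        <?php
        $failed = array();
        for ($index = $first; $index <= $last; $index++) {
            $to      = $email[$index];
            $subject = "Hey {$firstName[$index]}!";
            $headers = 'From: [email protected]' . "\r\n"
                     . 'Reply-To: [email protected]' . "\r\n"
                     . 'X-Mailer: PHP/' . phpversion();

            if (!mail($to, $subject, $emailMessage, $headers)) {
                $failed[] = $to;   // the local mailer refused this one
            }
            sleep(1);              // keep the original pacing between sends
        }
        echo 'Accepted: ' . (($last - $first + 1) - count($failed))
           . ', refused: ' . count($failed);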

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
    So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form that has a filter drop down that allows selection of categories. This list is static and rarely changes so rather than loading these items from the database each time I load the items once and then cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop down list was coming up with pre-set values that randomly changed. Didn't take me long to trace this back to the cached list of SelectListItem[]. Clearly the list was getting updated - apparently through the model binding process in the selection postback. To clarify the scenario here's the drop down list definition in the Razor View:@Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories") where Model.CategoryList gets set with:[HttpPost] [CompressContent] public ActionResult List(MessageListViewModel model) { InitializeViewModel(model); busEntry entryBus = new busEntry(); var entries = entryBus.GetEntryList(model.QueryParameters); model.Entries = entries; model.DisplayMode = ApplicationDisplayModes.Standard; model.CategoryList = AppUtils.GetCachedCategoryList(); return View(model); } The AppUtils.GetCachedCategoryList() method gets the cached list or loads the list on the first access. The code to load up the list is housed in a Web utility class. The method looks like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) return _CategoryList; lock (_SyncLock) { if (_CategoryList != null) return _CategoryList; var catBus = new busCategory(); var categories = catBus.GetCategories().ToList(); // Turn list into a SelectItem list var catList= categories .Select(cat => new SelectListItem() { Text = cat.Name, Value = cat.Id.ToString() }) .ToList(); catList.Insert(0, new SelectListItem() { Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(), Text = "All Categories except Real Estate" }); catList.Insert(1, new SelectListItem() { Value = "-1", Text = "--------------------------------" }); _CategoryList = catList.ToArray(); } return _CategoryList; } private static SelectListItem[] _CategoryList ; This seemed normal enough to me - I've been doing stuff like this forever caching smallish lists in memory to avoid an extra trip to the database. This list is used in various places throughout the application - for the list display and also when adding new items and setting up for notifications etc.. Watch that ModelBinder! However, it turns out that this code is clearly causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to and changing the actual entry item in the list and setting its selected value. If you look at the code above I'm not setting the SelectListItem.Selected value anywhere - the only place this value can get set is through ModelBinding. Sure enough when stepping through the code I see that when an item is selected the actual model - model.CategoryList[x].Selected - reflects that. This is bad on several levels: First it's obviously affecting the application behavior - nobody wants to see their drop down list values jump all over the place randomly. 
But it's also a problem because the array is getting updated by multiple ASP.NET threads which likely would lead to odd crashes from time to time. Not good! In retrospect the modelbinding behavior makes perfect sense. The actual items and the Selected property is the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder is actually updating it on a post back producing the rather surprising results. Totally missed this during testing and is another one of those little - "Did you know?" moments. So, is there a way around this? Yes but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way that the list is generated. Rather than returning the cached list, I can return a brand new cloned list from the cached items like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) { // Have to create new instances via projection // to avoid ModelBinding updates to affect this // globally return _CategoryList .Select(cat => new SelectListItem() { Value = cat.Value, Text = cat.Text }) .ToArray(); } …}  The key is that newly created instances of SelectListItems are returned not just filtered instances of the original list. The key here is 'new instances' so that the ModelBinding updates do not update the actual static instance. The code above uses LINQ and a projection into new SelectListItem instances to create this array of fresh instances. And this code works correctly - no more cross-talk between users. Unfortunately this code is also less efficient - it has to reselect the items and uses extra memory for the new array. Knowing what I know now I probably would have not cached the list and just take the hit to read from the database. If there is even a possibility of thread clashes I'm very wary of creating code like this. But since the method already exists and handles this load in one place this fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head scratchers sometimes…© Rick Strahl, West Wind Technologies, 2005-2012Posted in MVC  ASP.NET  .NET   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • CodePlex Daily Summary for Thursday, April 15, 2010

    CodePlex Daily Summary for Thursday, April 15, 2010New ProjectsApplication Logging Repository (ALR): The ALR is a light-weight logging framework that allows applications to log events and exceptions to a central repository.Arkane.FileProperties.DSS: Arkane.FileProperties.Dss is a library for parsing the file header of a .DSS file (as used by Olympus digital dictaphone systems) to obtain time, v...B in conTrol project: This project enables controling log-in and locking your workstation automatically, identifyng you bluetooth.DarkBook: DarkBook is a personal library project.Direct2D for Microsoft .Net: Direct2D, DirectWrite and Windows Imaging wrappers for .Net. This library allows to access Direct2D, DirectWrite and Windows Imaging Windows API f...DJ Ware: DJ Ware is an extensible music player with plugin support and innovative features to organize and explore music files. It is developed with C#, WPF...gpsMe: gpsMe is a Windows Mobile 6.x mapping solution allowing to place the user on a personnalized map. The screen requirements are VGA or WVGA but, you ...jErrorLog: jErrorLog is an error logging component for use in DotNet 2.0 or later applications. It can log error messages to any of the following: database, e...KEMET_API: Java Library (open - source). This library is a help to study egyptian hieroglyphs.Meadow: A web site project for a Swedish floorball team called Slackers. Home page built with ASP.NET 2.0, ASP.NET AJAX and SQL Server 2005.Mustang Math: Mustang Math makes it easier for young children to practice basic math facts on the computer. No keyboard or mouse required - just say the answer!...Net.Formats.oEmbed: oEmbed format implementation in c#. oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows...Normlize O/R Mapper: Open source O/RM tool that participates with traditional inheritance object models as well as Hibernate/nHibernate style class shells. As I have t...N-Twill Twitter Client for VB.NET: Proyecto de cliente twitter hecho con la libreria TwitterVB2 y hecho en VB.net 2008.SIQM: Spatial Information Quality Management Toolset TIMETABLEASY Web: Under developmentTweetSharp: TweetSharp is a complete .NET library for micro-blogging platforms that allows you to write short and sweet expressions that fly to Twitter, Yammer...UISandbox: UISandbox is a sample C# source code showing how to deal with plugins requiring sandbox, when those plugins must interact with WPF application inte...WinForm SharePoint Web Part Manager: The SharePoint Web Part Manager is a WinForm tool using the SharePoint object model that enables developers and power users to add, update, delete,...WoW Character Viewer: View your World of Warcraft character (or anyone else's character), using this application. Written using Visual Basic Express 2008, then ported t...Xrns2XMod: Xrns2XMod converts from Renoise format (xrns) to mod or xm, which are more compatible formats playable from xmplay or vlc.New ReleasesArkane.FileProperties.DSS: 1.0 stable release: Executables and merge module for 1.0. (See documentation.)Bluetooth Radar: Version 2.0: Add IrDA reference for Bluetooth sending using Obex Add Project icon Add Bluetooth detection mode (Auto close application is there is no blueto...BUtil: BUtil 5.0 Alpha: Backup tasks adding.... in progressChronos WPF: Chronos v1.0 RC 1: Chronos v1.0 RC 1. Development will be feature frozen after this release, only bug fixes will be allowed. 
Updated nRoute assembly to v0.4 (http:...clipShow: Version 2.5: Release that addresses the canonical syntax issues in search discoverd by Tschachim (thanks again!). Also, the play list and play all menu items s...DarkBook: DarkBook alpha: Hi, here comes the alpha version of Darkbook. It has all the functions already but is still in developing. I hope it's helpful for you, at least it...DirectQ: Release 1.8.3a: Improvements to 1.8.2, which will be shortly be removed. This replaces the original 1.8.3 release from earlier today which had some late-breaking ...Effect Custom Tool for Visual Studio: Effect Custom Tool v1.1: Effect Custom Tool for Visual Studio is a visual studio 2008 extension that helps you generate c# classes from effect (*.fx) files for use with Xna...Folder Bookmarks: Folder Bookmarks 1.4.3: This is the latest version of Folder Bookmarks (1.4.3), with general improvements. It has an installer - it will create a directory 'CPascoe' in My...gpsMe: gpsMe v0.3: Required Hardware Windows Mobile 6 .Net Compact Framework 3.5 integrated gps device VGA or WVGA screen (normally works on others)IST435: Lab 4 - Enterprise Level CMS with DotNetNuke: Lab 4 - Enterprise Level CMS with DotNetNukeThis is the "starter kit" that you must base your Lab 4 on. This lab must be completed in-class.Mouse Jiggler: MouseJiggle-1.1: 1.1 release of Mouse Jiggler, now with x64 compatibility and the ability to start jiggling on run with the --jiggle or -j command-line switch.Mustang Math: MustangMath.exe: This is a quick and dirty "0.1" prototype to demonstrate the speech recognition idea. It starts asking you questions automatically on launch and k...MvcContrib: a Codeplex Foundation project: 2.0.36.0 for MVC2 (RTW): Please see the Change Log for a complete list of changes. MVC BootCamp Description of the releases: MvcContrib.Release.zip MvcContrib.dll MvcC...Nito.LINQ: Beta (v0.3): New features for this release: Several new supported platforms (see below). PDBs that are source-indexed to the appropriate CodePlex changeset. ...OpenIdPortableArea: 0.1.0.2 OpenIdPortableArea: OpenIdPortableArea.Release: DotNetOpenAuth.dll DotNetOpenAuth.xml MvcContrib.dll MvcContrib.xml OpenIdPortableArea.dll OpenIdPortableAre...PokeIn Comet Ajax Library: PokeIn Sample with Library v0.2: New version of PokeIn library with sample. v0.2 There are new features in this release and no bug detected yet.Project Tru Tiên: Elements-test V1-fix (v2): Là EL test được fix tiếp theo bản fix V1, tạm gọi đây là bản fix V2 của ELtest Trong bản fix này EL được fix thêm vụ Quest, Quest chỉnh sửa đúng t...Rule 18 - Love your clipboard: Rule 18: This is the third public beta for the first version of Rule 18. This version has been updated to support Visual Studio 2010 RTM and .NET 4.0 RTM. ...SevenZipSharp: SevenZipSharp 0.62: Added: Extraction from SFX archives. Now it is possible to unrar RAR self-extractors, unzip ZIP self-extractors, etc. 
Extraction from DOC, XLS, (...SharePoint Labs: SPLab3001A-FRA-Level200: SPLab3001A-FRA-Level200 This SharePoint Lab will teach the persistence object layer that SharePoint uses to centraly store configuration data and o...TTXPathNavigator: TTXPathNavigator for VS2010: Version for Visual Studio 2010turing machine simulator: SDS: SDS documentVecDraw: VecDraw_0.2.25: Alpha release for test purposesWinForm SharePoint Web Part Manager: Beta 1: First release of the WinForm SharePoitn web part manager toolXrns2XMod: Xrns2XMod 0.5.1: Mod and XM conversion format - No sample data conversion at momentZip Solution: ZipSolution 5.3: Features: 1. Added WaitMsec for visual studio support with getting access to files in post build event; 2. Added ShowTextInToolbars to app.config ...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesPHPExcelpatterns & practices – Enterprise LibraryMost Active ProjectsRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationFarseer Physics EngineIonics Isapi Rewrite FilterNB_Store - Free DotNetNuke Ecommerce Catalog ModuleBlogEngine.NETjQuery Library for SharePoint Web ServicesDotRasFacebook Developer Toolkit

    Read the article

  • Try a sample: Using the counter predicate for event sampling

    - by extended_events
    Extended Events offers a rich filtering mechanism, called predicates, that allows you to reduce the number of events you collect by specifying criteria that will be applied during event collection. (You can find more information about predicates in Using SQL Server 2008 Extended Events (by Jonathan Kehayias)) By evaluating predicates early in the event firing sequence we can reduce the performance impact of collecting events by stopping event collection when the criteria are not met. You can specify predicates on both event fields and on a special object called a predicate source. Predicate sources are similar to action in that they typically are related to some type of global information available from the server. You will find that many of the actions available in Extended Events have equivalent predicate sources, but actions and predicates sources are not the same thing. Applying predicates, whether on a field or predicate source, is very similar to what you are used to in T-SQL in terms of how they work; you pick some field/source and compare it to a value, for example, session_id = 52. There is one predicate source that merits special attention though, not just for its special use, but for how the order of predicate evaluation impacts the behavior you see. I’m referring to the counter predicate source. The counter predicate source gives you a way to sample a subset of events that otherwise meet the criteria of the predicate; for example you could collect every other event, or only every tenth event. Simple CountingThe counter predicate source works by creating an in memory counter that increments every time the predicate statement is evaluated. Here is a simple example with my favorite event, sql_statement_completed, that only collects the second statement that is run. (OK, that’s not much of a sample, but this is for demonstration purposes. Here is the session definition: CREATE EVENT SESSION counter_test ON SERVERADD EVENT sqlserver.sql_statement_completed    (ACTION (sqlserver.sql_text)    WHERE package0.counter = 2)ADD TARGET package0.ring_bufferWITH (MAX_DISPATCH_LATENCY = 1 SECONDS) You can find general information about the session DDL syntax in BOL and from Pedro’s post Introduction to Extended Events. The important part here is the WHERE statement that defines that I only what the event where package0.count = 2; in other words, only the second instance of the event. Notice that I need to provide the package name along with the predicate source. You don’t need to provide the package name if you’re using event fields, only for predicate sources. Let’s say I run the following test queries: -- Run three statements to test the sessionSELECT 'This is the first statement'GOSELECT 'This is the second statement'GOSELECT 'This is the third statement';GO Once you return the event data from the ring buffer and parse the XML (see my earlier post on reading event data) you should see something like this: event_name sql_text sql_statement_completed SELECT ‘This is the second statement’ You can see that only the second statement from the test was actually collected. (Feel free to try this yourself. Check out what happens if you remove the WHERE statement from your session. Go ahead, I’ll wait.) Percentage Sampling OK, so that wasn’t particularly interesting, but you can probably see that this could be interesting, for example, lets say I need a 25% sample of the statements executed on my server for some type of QA analysis, that might be more interesting than just the second statement. 
All comparisons of predicates are handled using an object called a predicate comparator; the simple comparisons such as equals, greater than, etc. are mapped to the common mathematical symbols you know and love (eg. = and >), but to do the less common comparisons you will need to use the predicate comparators directly. You would probably look to the MOD operation to do this type sampling; we would too, but we don’t call it MOD, we call it divides_by_uint64. This comparator evaluates whether one number is divisible by another with no remainder. The general syntax for using a predicate comparator is pred_comp(field, value), field is always first and value is always second. So lets take a look at how the session changes to answer our new question of 25% sampling: CREATE EVENT SESSION counter_test_25 ON SERVERADD EVENT sqlserver.sql_statement_completed    (ACTION (sqlserver.sql_text)    WHERE package0.divides_by_uint64(package0.counter,4))ADD TARGET package0.ring_bufferWITH (MAX_DISPATCH_LATENCY = 1 SECONDS)GO Here I’ve replaced the simple equivalency check with the divides_by_uint64 comparator to check if the counter is evenly divisible by 4, which gives us back every fourth record. I’ll leave it as an exercise for the reader to test this session. Why order matters I indicated at the start of this post that order matters when it comes to the counter predicate – it does. Like most other predicate systems, Extended Events evaluates the predicate statement from left to right; as soon as the predicate statement is proven false we abandon evaluation of the remainder of the statement. The counter predicate source is only incremented when it is evaluated so whether or not the counter is incremented will depend on where it is in the predicate statement and whether a previous criteria made the predicate false or not. Here is a generic example: Pred1: (WHERE statement_1 AND package0.counter = 2)Pred2: (WHERE package0.counter = 2 AND statement_1) Let’s say I cause a number of events as follows and examine what happens to the counter predicate source. Iteration Statement Pred1 Counter Pred2 Counter A Not statement_1 0 1 B statement_1 1 2 C Not statement_1 1 3 D statement_1 2 4 As you can see, in the case of Pred1, statement_1 is evaluated first, when it fails (A & C) predicate evaluation is stopped and the counter is not incremented. With Pred2 the counter is evaluated first, so it is incremented on every iteration of the event and the remaining parts of the predicate are then evaluated. In this example, Pred1 would return an event for D while Pred2 would return an event for B. But wait, there is an interesting side-effect here; consider Pred2 if I had run my statements in the following order: Not statement_1 Not statement_1 statement_1 statement_1 In this case I would never get an event back from the system because the point at which counter=2, the rest of the predicate evaluates as false so the event is not returned. If you’re using the counter target for sampling and you’re not getting the expected events, or any events, check the order of the predicate criteria. As a general rule I’d suggest that the counter criteria should be the last element of your predicate statement since that will assure that your sampling rate will apply to the set of event records defined by the rest of your predicate. Aside: I’m interested in hearing about uses for putting the counter predicate criteria earlier in the predicate statement. If you have one, post it in a comment to share with the class. 
- Mike Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable defacto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to rule that Unix archive formats are all based on the original Bell labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and are going to be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.

    Read the article

  • 12c - Utl_Call_Stack...

    - by noreply(at)blogger.com (Thomas Kyte)
    Over the next couple of months, I'll be writing about some cool new little features of Oracle Database 12c - things that might not make the front page of Oracle.com.  I'm going to start with a new package - UTL_CALL_STACK.In the past, developers have had access to three functions to try to figure out "where the heck am I in my code", they were:dbms_utility.format_call_stackdbms_utility.format_error_backtracedbms_utility.format_error_stackNow these routines, while useful, were of somewhat limited use.  Let's look at the format_call_stack routine for a reason why.  Here is a procedure that will just print out the current call stack for us:ops$tkyte%ORA12CR1> create or replace  2  procedure Print_Call_Stack  3  is  4  begin  5    DBMS_Output.Put_Line(DBMS_Utility.Format_Call_Stack());  6  end;  7  /Procedure created.Now, if we have a package - with nested functions and even duplicated function names:ops$tkyte%ORA12CR1> create or replace  2  package body Pkg is  3    procedure p  4    is  5      procedure q  6      is  7        procedure r  8        is  9          procedure p is 10          begin 11            Print_Call_Stack(); 12            raise program_error; 13          end p; 14        begin 15          p(); 16        end r; 17      begin 18        r(); 19      end q; 20    begin 21      q(); 22    end p; 23  end Pkg; 24  /Package body created.When we execute the procedure PKG.P - we'll see as a result:ops$tkyte%ORA12CR1> exec pkg.p----- PL/SQL Call Stack -----  object      line  object  handle    number  name0x6e891528         4  procedure OPS$TKYTE.PRINT_CALL_STACK0x6ec4a7c0        10  package body OPS$TKYTE.PKG0x6ec4a7c0        14  package body OPS$TKYTE.PKG0x6ec4a7c0        17  package body OPS$TKYTE.PKG0x6ec4a7c0        20  package body OPS$TKYTE.PKG0x76439070         1  anonymous blockBEGIN pkg.p; END;*ERROR at line 1:ORA-06501: PL/SQL: program errorORA-06512: at "OPS$TKYTE.PKG", line 11ORA-06512: at "OPS$TKYTE.PKG", line 14ORA-06512: at "OPS$TKYTE.PKG", line 17ORA-06512: at "OPS$TKYTE.PKG", line 20ORA-06512: at line 1The bit in red above is the output from format_call_stack whereas the bit in black is the error message returned to the client application (it would also be available to you via the format_error_backtrace API call). As you can see - it contains useful information but to use it you would need to parse it - and that can be trickier than it seems.  The format of those strings is not set in stone, they have changed over the years (I wrote the "who_am_i", "who_called_me" functions, I did that by parsing these strings - trust me, they change over time!).Starting in 12c - we'll have structured access to the call stack and a series of API calls to interrogate this structure.  
    I'm going to rewrite the print_call_stack function as follows:

    ops$tkyte%ORA12CR1> create or replace
      2  procedure Print_Call_Stack
      3  as
      4    Depth pls_integer := UTL_Call_Stack.Dynamic_Depth();
      5
      6    procedure headers
      7    is
      8    begin
      9        dbms_output.put_line( 'Lexical   Depth   Line    Name' );
     10        dbms_output.put_line( 'Depth             Number      ' );
     11        dbms_output.put_line( '-------   -----   ----    ----' );
     12    end headers;
     13    procedure print
     14    is
     15    begin
     16        headers;
     17        for j in reverse 1..Depth loop
     18          DBMS_Output.Put_Line(
     19            rpad( utl_call_stack.lexical_depth(j), 10 ) ||
     20                    rpad( j, 7) ||
     21            rpad( To_Char(UTL_Call_Stack.Unit_Line(j), '99'), 9 ) ||
     22            UTL_Call_Stack.Concatenate_Subprogram
     23                       (UTL_Call_Stack.Subprogram(j)));
     24        end loop;
     25    end;
     26  begin
     27    print;
     28  end;
     29  /

    Here we are able to figure out what 'depth' we are in the code (utl_call_stack.dynamic_depth) and then walk up the stack using a loop. We will print out the lexical_depth, along with the line number within the unit we were executing plus - the unit name. And not just any unit name, but the fully qualified, all of the way down to the subprogram name within a package. Not only that - but down to the subprogram name within a subprogram name within a subprogram name. For example - running the PKG.P procedure again results in:

    ops$tkyte%ORA12CR1> exec pkg.p
    Lexical   Depth   Line    Name
    Depth             Number
    -------   -----   ----    ----
    1         6       20      PKG.P
    2         5       17      PKG.P.Q
    3         4       14      PKG.P.Q.R
    4         3       10      PKG.P.Q.R.P
    0         2       26      PRINT_CALL_STACK
    1         1       17      PRINT_CALL_STACK.PRINT

    BEGIN pkg.p; END;

    *
    ERROR at line 1:
    ORA-06501: PL/SQL: program error
    ORA-06512: at "OPS$TKYTE.PKG", line 11
    ORA-06512: at "OPS$TKYTE.PKG", line 14
    ORA-06512: at "OPS$TKYTE.PKG", line 17
    ORA-06512: at "OPS$TKYTE.PKG", line 20
    ORA-06512: at line 1

    This time - we get much more than just a line number and a package name as we did previously with format_call_stack. We not only got the line number and package (unit) name - we got the names of the subprograms - we can see that P called Q called R called P as nested subprograms. Also note that we can see a 'truer' calling level with the lexical depth, we can see we "stepped" out of the package to call print_call_stack and that in turn called another nested subprogram.

    This new package will be a nice addition to everyone's error logging packages. Of course there are other functions in there to get owner names, the edition in effect when the code was executed and more. See UTL_CALL_STACK for all of the details.
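    As an aside, for readers who think in other languages: the difference between parsing a pre-formatted stack string and walking a structured call stack is the same one you see in, say, Python between traceback.format_stack() and inspect.stack(). This is only an analogy to illustrate the idea - it is not anything the UTL_CALL_STACK package itself provides:

        # Analogy only (Python, not PL/SQL): formatted-string stack vs. structured stack.
        import inspect
        import traceback

        def print_call_stack():
            # Old style: one pre-formatted string you would have to parse yourself
            # (roughly what dbms_utility.format_call_stack gives you).
            print("".join(traceback.format_stack()))

            # New style: structured access to each frame, roughly what
            # utl_call_stack exposes - depth, line number and unit name as data.
            frames = inspect.stack()
            print("Depth  Line   Name")
            for depth, frame in enumerate(reversed(frames), start=1):
                print(f"{depth:<6} {frame.lineno:<6} {frame.function}")

        def p():
            def q():
                def r():
                    print_call_stack()
                r()
            q()

        p()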

    Read the article

  • Mysterious dbboon folder with proxy.php file on my godaddy account

    - by Paul
    When doing some web maintenance today, I noticed a strange new folder on my GoDaddy hosting account at the root level named "dbboon", with a single file inside, called proxy.php. It's code is listed below, and seems to be some sort of proxy function. I was kind of troubled because I didn't put it there. I googled all this to learn more, but didn't find anything, except for the proxy file happened to be also stored at pastebin.com: http://pastebin.com/PQsSPbCr I called GoDaddy and they confirmed that it belonged to them, said it was put there by their advanced hosting group for testing purposes but didn't have any more information. I thought this was all really weird: why would they put something in my folder without giving me a heads-up, and why would they need to do something like this? anybody know anything about this? <?php $version = '1.2'; if(isset($_GET['dbboon_version'])) { echo '{"version":"' . $version . '"}'; exit; } function dbboon_parseHeaders($subject) { global $version; $subject = trim($subject); $parsed = Array(); $len = strlen($subject); $position = $field = 0; $position = strpos($subject, "\r\n") + 2; while(isset($subject[$position])) { $nextC = strpos($subject, ':', $position); $fieldName = substr($subject, $position, ($nextC-$position)); $position += strlen($fieldName) + 1; $fieldValue = NULL; while(1) { $nextCrlf = strpos($subject, "\r\n", $position - 1); if(FALSE === $nextCrlf) { $t = substr($subject, $position); $position = $len; } else { $t = substr($subject, $position, $nextCrlf-$position); $position += strlen($t) + 2; } $fieldValue .= $t; if(!isset($subject[$position]) || (' ' != $subject[$position] && "\t" != $subject[$position])) { break; } } $parsed[strtolower($fieldName)] = trim($fieldValue); if($position > $len) { echo '{"result":false,"error":{"code":4,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; } } return $parsed; } if(!function_exists('http_build_query')) { function http_build_query($data, $prefix = '', $sep = '', $key = '') { $ret = Array(); foreach((array) $data as $k => $v) { if(is_int($k) && NULL != $prefix) { $k = urlencode($prefix . $k); } if(!empty($key) || $key === 0) { $k = $key . '[' . urlencode($k) . ']'; } if(is_array($v) || is_object($v)) { array_push($ret, http_build_query($v, '', $sep, $k)); } else { array_push($ret, $k . '=' . urlencode($v)); } } if(empty($sep)) { $sep = '&'; } return implode($sep, $ret); } } $host = 'dbexternalsubscriber.secureserver.net'; $get = http_build_query($_GET); $post = http_build_query($_POST); $url = $get ? "?$get" : ''; $fp = fsockopen($host, 80, $errno, $errstr); if($fp) { $payload = "POST /embed/$url HTTP/1.1\r\n"; $payload .= "Host: $host\r\n"; $payload .= "Content-Length: " . strlen($post) . 
"\r\n"; $payload .= "Content-Type: application/x-www-form-urlencoded\r\n"; $payload .= "Connection: Close\r\n\r\n"; $payload .= $post; fwrite($fp, $payload); $httpCode = NULL; $response = NULL; $timeout = time() + 15; do { while($line = fgets($fp)) { $response .= $line; if(!trim($line)) { break; } } } while($timeout > time() && NULL === $response); $headers = dbboon_parseHeaders($response); if(isset($headers['transfer-encoding']) && 'chunked' === $headers['transfer-encoding']) { do { $cSize = $read = hexdec(trim(fgets($fp))); while($read > 0) { $buff = fread($fp, $read); $read -= strlen($buff); $response .= $buff; } $response .= fgets($fp); } while($cSize > 0); } else { preg_match('/Content-Length:\s([0-9]+)\r\n/msi', $response, $match); if(!isset($match[1])) { echo '{"result":false,"error":{"code":3,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; } else { while($match[1] > 0) { $buff = fread($fp, $match[1]); $match[1] -= strlen($buff); $response .= $buff; } } } fclose($fp); if(!$pos = strpos($response, "\r\n\r\n")) { echo '{"result":false,"error":{"code":2,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; } echo substr($response, $pos + 4); } else { echo '{"result":false,"error":{"code":1,"message":"Communication error, unable to contact proxy service.","version":"' . $version . '"}}'; exit; }

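    As far as I can tell from reading it, the file boils down to a small relay: it forwards my query string and POST body to dbexternalsubscriber.secureserver.net under /embed/ and echoes the response back, de-chunking the reply by hand when it uses Transfer-Encoding: chunked. Here is my rough rendering of that logic in Python - purely illustrative, it is not GoDaddy's code and I have simplified the error handling:

        # Condensed sketch (Python, not the original PHP) of what proxy.php does.
        import socket

        def relay(query: str, post_body: str) -> bytes:
            host = "dbexternalsubscriber.secureserver.net"
            request = (
                f"POST /embed/{query} HTTP/1.1\r\n"
                f"Host: {host}\r\n"
                f"Content-Length: {len(post_body)}\r\n"
                "Content-Type: application/x-www-form-urlencoded\r\n"
                "Connection: Close\r\n\r\n"
                f"{post_body}"
            ).encode()

            with socket.create_connection((host, 80), timeout=15) as s:
                s.sendall(request)
                raw = b""
                while chunk := s.recv(4096):
                    raw += chunk

            headers, _, body = raw.partition(b"\r\n\r\n")
            if b"chunked" not in headers.lower():
                return body

            # Decode Transfer-Encoding: chunked - each chunk is "<hex size>\r\n<data>\r\n",
            # terminated by a zero-size chunk (mirrors the do/while loop in the PHP).
            decoded = b""
            while body:
                size_line, _, body = body.partition(b"\r\n")
                size = int(size_line.strip() or b"0", 16)
                if size == 0:
                    break
                decoded += body[:size]
                body = body[size + 2:]  # skip the chunk data and its trailing CRLF
            return decoded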
    Read the article

  • Cannot Start Nginx Compiled from Source

    - by Jason Alan Kennedy
    I am trying to compile Nginx from source, based on the original compiled Nginx server running on my DigitalOcean server (Ubuntu 14.04 x64) but with a few extra modules. I can get everything installed smoothly, but I cannot get it to start. I am sure the ini is correct because I copied the original source off the currently running Nginx server [even though I see that Nginx now adds the ini when compiling from source]. Below is the [lengthy] process that I am performing - sorry, but I wanted to be thorough for those who need the info. Because I am a newB to Nginx, I am sure I am missing something or just have it all wrong. If you could look over what I have done and see if you spot anything I need to change, I would greatly appreciate it. Thnx!

    With the original Nginx server still running, I check the current/running Nginx configuration so I can build the new Nginx instance the same way but with the added modules:

    nginx -V

    # The output:
    configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module

    NOTE: The configure arguments below return errors during 'make', so I removed them. I don't know what they are - could this be related to my issue???
--with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' Moving on: # So I don't have to sudo every line: sudo bash # Check for updates first thing: apt-get update # Install various prerequisites needed to compile Nginx: apt-get install build-essential libgd2-xpm-dev lsb-base zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libxslt1-dev libxml2 libssl-dev libgeoip-dev tar unzip openssl # Create System users [ if it doesn't exist - but I see its there on DigitalOceans' Droplets all-ready ]: adduser --system --no-create-home --disabled-login --disabled-password --group www-data # Download NGINX wget http://nginx.org/download/nginx-1.7.4.tar.gz tar -xvzf nginx-1.7.4.tar.gz # Then Google PageSpeed: wget https://github.com/pagespeed/ngx_pagespeed/archive/release-1.8.31.4-beta.zip unzip release-1.8.31.4-beta.zip # cd into the PageSpeed Directory cd ngx_pagespeed-release-1.8.31.4-beta/ # and add the PSOL files in there: wget https://dl.google.com/dl/page-speed/psol/1.8.31.4.tar.gz tar -xzvf 1.8.31.4.tar.gz # Get back to the root directory: cd # I add the ngx_cache_purge module and will install the Nginx Helper plugin for WP later: wget https://github.com/FRiCKLE/ngx_cache_purge/archive/2.1.zip unzip 2.1.zip # Add the headers-more-nginx-module: wget https://github.com/openresty/headers-more-nginx-module/archive/v0.25.zip unzip v0.25.zip # and the naxsi module for added security: wget https://github.com/nbs-system/naxsi/archive/0.53-2.tar.gz tar -xvzf 0.53-2.tar.gz # cd to the new Nginx directory cd nginx-1.7.4 # Set up the configuration build based on the current running Nginx config args and add my additional modules: ./configure \ --add-module=$HOME/naxsi-0.53-2/naxsi_src \ --prefix=/usr/share/nginx \ --conf-path=/etc/nginx/nginx.conf \ --http-log-path=/var/log/nginx/access.log \ --error-log-path=/var/log/nginx/error.log \ --lock-path=/var/lock/nginx.lock \ --pid-path=/run/nginx.pid \ --http-client-body-temp-path=/var/lib/nginx/body \ --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \ --http-proxy-temp-path=/var/lib/nginx/proxy \ --http-scgi-temp-path=/var/lib/nginx/scgi \ --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \ --user=www-data \ --group=www-data \ --with-debug \ --with-pcre-jit \ --with-ipv6 \ --with-http_ssl_module \ --with-http_stub_status_module \ --with-http_realip_module \ --with-http_addition_module \ --with-http_dav_module \ --with-http_geoip_module \ --with-http_gzip_static_module \ --with-http_image_filter_module \ --with-http_spdy_module \ --with-http_sub_module \ --with-http_xslt_module \ --with-mail \ --with-mail_ssl_module \ --add-module=$HOME/ngx_pagespeed-release-1.8.31.4-beta \ --add-module=$HOME/ngx_cache_purge-2.1 \ --add-module=$HOME/headers-more-nginx-module-0.25 [ENTER] Configuration Summary: Configuration summary + using system PCRE library + using system OpenSSL library + md5: using OpenSSL library + sha1: using OpenSSL library + using system zlib library nginx path prefix: "/usr/share/nginx" nginx binary file: "/usr/share/nginx/sbin/nginx" nginx configuration prefix: "/etc/nginx" nginx configuration file: "/etc/nginx/nginx.conf" nginx pid file: "/run/nginx.pid" nginx error log file: "/var/log/nginx/error.log" nginx http access log file: "/var/log/nginx/access.log" nginx http client request body temporary files: "/var/lib/nginx/body" nginx http proxy temporary files: "/var/lib/nginx/proxy" nginx http fastcgi temporary files: "/var/lib/nginx/fastcgi" nginx 
http uwsgi temporary files: "/var/lib/nginx/uwsgi" nginx http scgi temporary files: "/var/lib/nginx/scgi" Next step: I cd to root and I check the old Nginx folder locations and double checked the 'make' output to see that they are the same: whereis nginx #Output: nginx: /usr/sbin/nginx /etc/nginx /usr/share/nginx NOTE: Not sure about the '/usr/sbin/nginx' - Possible issue??? Next I copy the old /etc/nginx/nginx.conf, /etc/nginx/sites-available/default, /etc/nginx/sites-enabled/default, /etc/init.d/nginx to a text file locally for safe keeping to use in the new Nginx server. Then stop the running Nginx server: service nginx stop , verify it's stopped: service --status-all and the output is: [ - ] nginx To verify that there are two Nginx directories, I cd to: cd nginx* and the output is an error indicating there are two nginx folders - Cool Beans! :) Now Install the new Nginx server: cd nginx-1.7.4 make install # INSTALL OUTPUT ######################################## make -f objs/Makefile install make[1]: Entering directory `/home/walkingfish/nginx-1.7.4' test -d '/usr/share/nginx' || mkdir -p '/usr/share/nginx' test -d '/usr/share/nginx/sbin' || mkdir -p '/usr/share/nginx/sbin' test ! -f '/usr/share/nginx/sbin/nginx' || mv '/usr/share/nginx/sbin/nginx' '/usr/share/nginx/sbin/nginx.old' cp objs/nginx '/usr/share/nginx/sbin/nginx' test -d '/etc/nginx' || mkdir -p '/etc/nginx' cp conf/koi-win '/etc/nginx' cp conf/koi-utf '/etc/nginx' cp conf/win-utf '/etc/nginx' test -f '/etc/nginx/mime.types' || cp conf/mime.types '/etc/nginx' cp conf/mime.types '/etc/nginx/mime.types.default' test -f '/etc/nginx/fastcgi_params' || cp conf/fastcgi_params '/etc/nginx' cp conf/fastcgi_params '/etc/nginx/fastcgi_params.default' test -f '/etc/nginx/fastcgi.conf' || cp conf/fastcgi.conf '/etc/nginx' cp conf/fastcgi.conf '/etc/nginx/fastcgi.conf.default' test -f '/etc/nginx/uwsgi_params' || cp conf/uwsgi_params '/etc/nginx' cp conf/uwsgi_params '/etc/nginx/uwsgi_params.default' test -f '/etc/nginx/scgi_params' || cp conf/scgi_params '/etc/nginx' cp conf/scgi_params '/etc/nginx/scgi_params.default' test -f '/etc/nginx/nginx.conf' || cp conf/nginx.conf '/etc/nginx/nginx.conf' cp conf/nginx.conf '/etc/nginx/nginx.conf.default' test -d '/run' || mkdir -p '/run' test -d '/var/log/nginx' || mkdir -p '/var/log/nginx' test -d '/usr/share/nginx/html' || cp -R html '/usr/share/nginx' test -d '/var/log/nginx' || mkdir -p '/var/log/nginx' ######################################################### I copy/create the files that I saved earlier to txt files in sites-available, the config, default and ini files then symlink them to sites-enabled, and so on. And now to start the server: service nginx start And this is where s#!+ hits the fan - Nada. I check to see if Nginx is running with service --status-all and its not. Also with nginx -V and its not installed??? I reboot the system too and still nothing. So I am not sure what is wrong here. The ini was copied over from the old server along with all the other config files after deleting the old files. When I opened the new compiled files, the nginx default data was present so I replaced them with my old original data prior to starting the new server for the first time. Also to be safe, I rm /etc/nginx/sites-enabled/default and symlinked with ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default with no errors and I verified that the data was in the sites-enabled/default file. 
    I don't think the server really/fully installed, because of the nginx -V result:

    The program 'nginx' can be found in the following packages:
     * nginx-core
     * nginx-extras
     * nginx-full
     * nginx-light
     * nginx-naxsi
    Try: apt-get install <selected package>

    Do/should I apt-get install nginx-1.7.4?? Or what package do I use, being that it's a custom package and make install earlier did nothing?? If you need to see the conf files I copied over from the old to the custom server, LMK and I'll post them. Again, your help here would be appreciated!

    Read the article

  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions, on two separate servers, at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs - but on closer inspection, it appears that it was all recorded at once. Here are the facts as I know them:

    - Windows Server 2003 R2 w/ IIS6
    - Logging using GMT, server local time GMT-7
    - Application was still operating, and I have SQL data to prove that
    - Time gaps appear in the log file, not across two
    - # headers appear at the gap
    - Load balancer pings every 30 seconds
    - No caching

    Here's info on a particular case: an entry appears for 2009-09-21 18:09:27, then #headers; the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. About half of the ~2000 entries on 2009-09-22 01:21:54 are load balancer pings (est. at 2/min for 6.9 hrs = 828 pings), then entries are recorded as normal. I believe that these events may coincide with me deploying an ASP.NET application update onto those machines. Here's some relevant content from the logs in question:

    ex090921.log line 3684
    2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0
    2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0
    2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0
    2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0
    2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0
    #Software: Microsoft Internet Information Services 6.0
    #Version: 1.0
    #Date: 2009-09-21 18:04:37
    #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
    2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078
    2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109
    2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828
    2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
    2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
    ... continues until line 5449
    2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
    <eof>

    ex090922.log
    #Software: Microsoft Internet Information Services 6.0
    #Version: 1.0
    #Date: 2009-09-22 00:00:16
    #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
    2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
    2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
    ... continues until line 367
    2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
    2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0
    ... back to normal behavior

    Note the seemingly correct date/time written to the #header of the new log file. Also note that /ping.aspx returned 404 and then switched to 200 just as the problem started. I rename the "I'm alive" page so the load balancer stops sending requests to the server while I'm working on it. What you see here is me renaming it back so the load balancer will use the server again. So, this problem definitely coincides with me re-enabling the server. Any ideas?
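    For reference, the quickest way I know to spot this pattern in a W3C log is to compare consecutive entry timestamps and flag big jumps. A rough helper (Python; it only assumes that date and time are the first two fields, as in the #Fields lines above - it is not part of IIS or of the logs themselves):

        # Rough helper: flag places where consecutive log entries are far apart.
        # The "buffered then flushed" signature described above is such a jump
        # followed by a burst of lines that all carry the same timestamp.
        from datetime import datetime, timedelta
        import sys

        def find_gaps(path, gap=timedelta(minutes=30)):
            prev = None
            with open(path) as log:
                for line in log:
                    if not line.strip() or line.startswith("#"):  # skip W3C directives
                        continue
                    parts = line.split(" ", 2)
                    if len(parts) < 3:
                        continue
                    ts = datetime.strptime(parts[0] + " " + parts[1], "%Y-%m-%d %H:%M:%S")
                    if prev is not None and ts - prev > gap:
                        print(f"{path}: gap of {ts - prev} before entry at {ts}")
                    prev = ts

        if __name__ == "__main__":
            find_gaps(sys.argv[1])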

    Read the article

  • Pecl install failed

    - by poru
    Hello, I'm trying to install memcached with pecl, but every time I get this error:

    checking for libmemcached location... configure: error: memcached support requires libmemcached. Use --with-libmemcached-dir= to specify the prefix where libmemcached headers and library are located
    ERROR: `/tmp/pear/cache/memcached-1.0.1/configure' failed

    But if I type whereis memcached I get:

    /usr/bin/memcached /etc/memcached.conf /usr/share/memcached /usr/share/man/man1/memcached.1.gz

    I'm using an Ubuntu 8.04 server.

    Read the article

  • Can't install libpq-dev, ubuntu 10.10 and postgres 9

    - by sheepwalker
    I need some headers from the dev version of postgres 9, which are contained in libpq-dev, for installing the pg gem, but when I execute:

    sudo apt-get install libpq-dev

    I get the result:

    The following packages have unmet dependencies:
      libpq-dev : Depends: libpq5 (= 8.4.7-0ubuntu0.10.10) but 9.0.1-1~lucid is to be installed

    When I tried to remove libpq5 (to reinstall it correctly?), it threatened to remove postgresql-9.0:

    The following packages will be REMOVED:
      libpq5 pgadmin3 php5-pgsql postgresql-9.0 postgresql-client-9.0

    Does anybody know how to solve this problem? Thanks.

    Read the article

  • Success function not being called when form is submitted. jQuery / validationEngine / PHP form proce

    - by Tom Hartman
    Hi, I've been trying to figure out why the following script's success function isn't running. Everything in my form works perfectly, and the form contents are being emailed correctly, but the success function isn't being called. If anyone could review my code and let me know why my success function isn't being called I would very much appreciate it! Here is the HTML form with notification divs, which are hidden via css: <div id="success" class="notification"> <p>Thank you! Your message has been sent.</p> </div> <div id="failure" class="notification"> <p>Sorry, your message could not be sent.</p> </div> <form id="contact-form" method="post" action="" class="jqtransform"> <label for="name">Name:</label> <input name="name" id="name" type="text" class="validate[required] input" /> <label for="company">Company:</label> <input name="company" id="company" type="text" class="input" /> <label for="phone">Phone:</label> <input name="phone" id="phone" type="text" class="input" /> <label for="email">Email:</label> <input name="email" id="email" type="text" class="validate[required,email] input" /> <div class="sep"></div> <label for="subject">Subject:</label> <input name="subject" id="subject" type="text" class="validate[required] input" /> <div class="clear"></div> <label for="message">Message:</label> <textarea name="message" id="message" class="validate[required]"></textarea> <div id="check-services"> <input type="checkbox" name="services[]" value="Contractor Recommendation" /> <div>Contractor Recommendation</div> <input type="checkbox" name="services[]" value="Proposal Review" /> <div>Proposal Review</div> <input type="checkbox" name="services[]" value="Existing Website Review" /> <div>Existing Website Review</div> <input type="checkbox" name="services[]" value="Work Evaluation" /> <div>Work Evaluation</div> <input type="checkbox" name="services[]" value="Layman Translation" /> <div>Layman Translation</div> <input type="checkbox" name="services[]" value="Project Management" /> <div>Project Management</div> </div> <div class="sep"></div> <input name="submit" id="submit" type="submit" class="button" value="Send" /> <input name="reset" id="reset" type="reset" class="button" value="Clear" onclick="$.validationEngine.closePrompt('.formError',true)" /> </form> Here is the javascript: // CONTACT FORM VALIDATION AND SUBMISSION $(document).ready(function(){ $('#contact-form').validationEngine({ ajaxSubmit: true, ajaxSubmitFile: 'lib/mail.php', scroll: false, success: function(){ $('#success').slideDown(); }, failure: function(){ $('#failure').slideDown(); $('.formError').animate({ marginTop: '+30px' }); } }); }); And here is my PHP mailer script: <?php $name = $_POST['name']; $company = $_POST['company']; $phone = $_POST['phone']; $email = $_POST['email']; $subject = $_POST['subject']; $message = $_POST['message']; $services = $_POST['services']; $to = '[email protected]'; $subject = 'THC - Contact'; $content .= "You received a message from ".$name.".\r\n\n"; if ($company): $content .= "They work for ".$company.".\r\n\n"; endif; $content .= "Here's the message:\r\n\n".$message."\r\n\n"; $content .= "And they are interested in the services below:\r\n\n"; $content .= implode("\r\n",$services); if ($phone): $content .= "\r\n\nYou can reach them at ".$phone."."; else: $content .= "\r\n\nNo phone number was provided."; endif; $headers = "From: ".$name."\r\n"; $headers .= "Reply-To: ".$email."\r\n"; if (mail($to,$subject,$content,$headers)): return true; else: return false; endif; ?>

    Read the article

  • REST API Help in Rails

    - by dannymcc
    Hi Everyone, I am trying to get some information posted using our accountancy package (FreeAgentCentral) using their API via a GEM. http://github.com/aaronrussell/freeagent_api/ I have the following code to get it working (supposedly): Kase Controller def create @kase = Kase.new(params[:kase]) @company = Company.find(params[:kase][:company_id]) @kase = @company.kases.create!(params[:kase]) respond_to do |format| if @kase.save UserMailer.deliver_makeakase("[email protected]", "Highrise", @kase) @kase.create_freeagent_project(current_user) #flash[:notice] = 'Case was successfully created.' flash[:notice] = fading_flash_message("Case was successfully created & sent to Highrise.", 5) format.html { redirect_to(@kase) } format.xml { render :xml => @kase, :status => :created, :location => @kase } else format.html { render :action => "new" } format.xml { render :xml => @kase.errors, :status => :unprocessable_entity } end end end To save you looking through, the important part is: @kase.create_freeagent_project(current_user) Kase Model # FreeAgent API Project Create # Required attribues # :contact_id # :name # :payment_term_in_days # :billing_basis # must be 1, 7, 7.5, or 8 # :budget_units # must be Hours, Days, or Monetary # :status # must be Active or Completed def create_freeagent_project(current_user) p = Freeagent::Project.create( :contact_id => 0, :name => "#{jobno} - #{highrisesubject}", :payment_terms_in_days => 5, :billing_basis => 1, :budget_units => 'Hours', :status => 'Active' ) user = Freeagent::User.find_by_email(current_user.email) Freeagent::Timeslip.create( :project_id => p.id, :user_id => user.id, :hours => 1, :new_task => 'Setup', :dated_on => Time.now ) end lib/freeagent_api.rb require 'rubygems' gem 'activeresource', '< 3.0.0.beta1' require 'active_resource' module Freeagent class << self def authenticate(options) Base.authenticate(options) end end class Error < StandardError; end class Base < ActiveResource::Base def self.authenticate(options) self.site = "https://#{options[:domain]}" self.user = options[:username] self.password = options[:password] end end # Company class Company def self.invoice_timeline InvoiceTimeline.find :all, :from => '/company/invoice_timeline.xml' end def self.tax_timeline TaxTimeline.find :all, :from => '/company/tax_timeline.xml' end end class InvoiceTimeline < Base self.prefix = '/company/' end class TaxTimeline < Base self.prefix = '/company/' end # Contacts class Contact < Base end # Projects class Project < Base def invoices Invoice.find :all, :from => "/projects/#{id}/invoices.xml" end def timeslips Timeslip.find :all, :from => "/projects/#{id}/timeslips.xml" end end # Tasks - Complete class Task < Base self.prefix = '/projects/:project_id/' end # Invoices - Complete class Invoice < Base def mark_as_draft connection.put("/invoices/#{id}/mark_as_draft.xml", encode, self.class.headers).tap do |response| load_attributes_from_response(response) end end def mark_as_sent connection.put("/invoices/#{id}/mark_as_sent.xml", encode, self.class.headers).tap do |response| load_attributes_from_response(response) end end def mark_as_cancelled connection.put("/invoices/#{id}/mark_as_cancelled.xml", encode, self.class.headers).tap do |response| load_attributes_from_response(response) end end end # Invoice items - Complete class InvoiceItem < Base self.prefix = '/invoices/:invoice_id/' end # Timeslips class Timeslip < Base def self.find(*arguments) scope = arguments.slice!(0) options = arguments.slice!(0) || {} if options[:params] && options[:params][:from] 
&& options[:params][:to] options[:params][:view] = options[:params][:from]+'_'+options[:params][:to] options[:params].delete(:from) options[:params].delete(:to) end case scope when :all then find_every(options) when :first then find_every(options).first when :last then find_every(options).last when :one then find_one(options) else find_single(scope, options) end end end # Users class User < Base self.prefix = '/company/' def self.find_by_email(email) users = User.find :all users.each do |u| u.email == email ? (return u) : next end raise Error, "No user matches that email!" end end end config/initializers/freeagent.rb Freeagent.authenticate({ :domain => 'XXXXX.freeagentcentral.com', :username => '[email protected]', :password => 'XXXXXX' }) The above render the following error when trying to create a new Case and send the details to FreeAgent: ActiveResource::ResourceNotFound in KasesController#create Failed with 404 Not Found and ActiveResource::ResourceNotFound (Failed with 404 Not Found): app/models/kase.rb:56:in `create_freeagent_project' app/controllers/kases_controller.rb:96:in `create' app/controllers/kases_controller.rb:93:in `create' Rendered rescues/_trace (176.5ms) Rendered rescues/_request_and_response (1.1ms) Rendering rescues/layout (internal_server_error) If anyone can shed any light on this problem it would be greatly appreciated! Thanks, Danny

    Read the article

< Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >