Search Results

Search found 59975 results on 2399 pages for 'data comparison'.


  • Would an ORM have any way of determining that a SQLite column contains date-times or booleans?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?

    Read the article

  • A controller problem using a base CRUD model

    - by rkj
    In CodeIgniter I'm using a base CRUD My_model, but I have a small problem in my browse controller. My $data['posts'] gets all posts from the table called "posts". However, the author in that table is just a user_id, which is why I need my "getusername" function (it takes an ID and returns the username) to grab the username from the users table. I don't know how to proceed from here, since there is more than one post, so I need the username to either become part of the $data['posts'] array or be handled by some other smart solution. Can anyone help me out?

        function index() {
            $this->load->model('browse_model');
            $data['posts'] = $this->browse_model->get_all();
            $data['user'] = $this->browse_model->getusername(XX);
            $this->load->view('header');
            $this->load->view('browse/index', $data);
            $this->load->view('footer');
        }

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. So, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products that would track all these things for you. But at the time, we had no recourse but to write our own PowerShell scripts to do it.

    Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups"

    Sadly, I've heard this line more often than I would have liked. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You also need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective.

    The point I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when an emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "How Often" question for a second. If you have the disk, the network, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice, so I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, sent to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a strategy together.

    That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
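    For illustration only, here is a minimal T-SQL sketch of the routine described above - frequent log backups, a lightweight physical check on the production box, and the full check run wherever the backup was restored. The database name and backup path are placeholders, not anything taken from the post:

        -- Placeholder names throughout; schedule the log backup via an Agent job
        -- (e.g. every 5 minutes), generating a unique file name per run.
        BACKUP LOG SalesDB
            TO DISK = N'R:\Backups\SalesDB_20121105_0905.trn'
            WITH CHECKSUM, COMPRESSION;

        -- On the production server, when the full CHECKDB has been offloaded elsewhere.
        -- Trace flags 2562/2549 apply to SQL Server 2008 R2 SP1 CU4 and above, as noted above.
        DBCC TRACEON (2562, 2549, -1);
        DBCC CHECKDB (SalesDB) WITH PHYSICAL_ONLY, NO_INFOMSGS;

        -- On the server where the backup was restored: the full consistency check.
        DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS, ALL_ERRORMSGS;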

    Read the article

  • Zend_Form validation problem

    - by GrumpyCanuck
    I am having problems getting validation to work for a form built using Zend_Form. The idea is this: I have two dropdown. One is a list of players. The other is a list of free agents who play the same position as the player. I am using an onChange javascript callback to run some Ajax code that replaces the free agent list dropdown with a new one at the position of the player they've selected from the player dropdown. Now, perhaps this is the wrong way, but I built the form by creating an instance of Zend_Form and then creating all these setX methods that add elements to the form. My reasoning was that I wanted to display certain elements in specific places on the page, not just output $this-form on my template. The problem appears to be when I get the form post back, the validator seems to not know about the validation rule I set up for the free agent drop down. Here's some relevant code to look at. I'm a relative ZF n00b so feel free to tell me I am not doing things the ZF way if it leaps out at you. The action in the controller: public function indexAction() { if ($this->getRequest()->isPost()) { $form = new Baseball_Form_Transactions(); if ($form->isValid($this->_request->getPost())) { $data = $this->_request->getPost(); $leagueInfo = Doctrine::getTable('League')->findOneByShortName($data['shortLeagueName'])->toArray(); // Create the request top drop an existing player $transactionInfo = array( 'league_id' => $leagueInfo['id'], 'team_id' => $data['teamId'], 'player_id' => $data['players'], 'type' => 'drop', 'target_team_id' => 0, 'transaction_date' => date('Y-m-d H:m:s') ); $transaction = new Transaction(); $transaction->fromArray($transactionInfo); $transaction->save(); // Now we do the request to add a player $transactionInfo['team_id'] = 0; $transactionInfo['player_id'] = $data['freeAgents']; $transactionInfo['target_team_id'] = $data['teamId']; $transactionInfo['type'] = 'add'; $transaction = new Transaction(); $transaction->fromArray($transactionInfo); $transaction->save(); $this->_flashMessenger->addMessage('Added transaction'); } } $options = array( 'teamId' => $this->teamId, 'position' => 'C', 'leagueShortName' => $this->league ); $this->transactionForm->setMyPlayers($options); $this->transactionForm->setFreeAgents($options); $this->transactionForm->setTeamId($options); $this->transactionForm->setShortLeagueName($options); $this->view->transactionForm = $this->transactionForm; $this->view->messages = $this->_flashMessenger->getMessages(); $transaction = new Transaction(); $this->view->transactions = $transaction->byTeam($options); } Next we have the form itself public function setMyPlayers($options) { $data = Doctrine::getTable('Team')->find($options['teamId']); $players = array(); foreach ($data->Players->toArray() as $player) { $players[$player['id']] = "{$player['position']} - {$player['first_name']} {$player['last_name']}"; } $playersSelect = new Zend_Form_Element_Select( 'players', array( 'required' => true, 'label' => 'Players', 'multiOptions' => $players, ) ); $this->addElement($playersSelect); } public function setFreeAgents($options) { $q = Doctrine_Query::create() ->select('CONCAT(p.first_name, " ", p.last_name) as full_name, p.id, p.position') ->from('Player p') ->leftJoin('p.Teams t') ->leftJoin('t.League l ON l.short_name = ?', $options['leagueShortName']) ->where('t.id IS NULL') ->andWhere('p.position = ?', $options['position']) ->orderBy('p.last_name'); $q->setHydrationMode(Doctrine_Core::HYDRATE_ARRAY); $data = $q->execute(); $freeAgents = array(); foreach 
($data as $player) { $freeAgents[$player['id']] = $player['full_name']; } $freeAgentsSelect = new Zend_Form_Element_Select( 'freeAgents', array( 'label' => 'Free Agents', 'multiOptions' => $freeAgents, 'size' => 15 ) ); $freeAgentsSelect->setRequired(true); $this->addElement($freeAgentsSelect); } public function setShortLeagueName($options) { $shortLeagueNameHidden = new Zend_Form_Element_Hidden( 'shortLeagueName', array('value' => $options['leagueShortName']) ); $this->addElement($shortLeagueNameHidden); } public function setTeamId($options) { $teamIdHidden = new Zend_Form_Element_Hidden( 'teamId', array('value' => $options['teamId']) ); $this->addElement($teamIdHidden); } There is no init or __construct() method in the form. My problem seems simple enough: reject the form contents as invalid if they have not selected someone from the free agent list. Right now, it sails through as valid. I've spent some considerable time searching online for an answer, and haven't been able to find it. Thanks in advance for any help.

    Read the article

  • Does Android support near real time push notification

    - by j pimmel
    I recently learned about the ability of iPhone apps to receive nearly instantaneous notifications. This is provided in the form of push notifications, a bespoke protocol which keeps an always-on data connection to the iPhone and sends binary packets to the app, which pops up alerts incredibly quickly - between 0.5 and 5 seconds from the server sending to the phone app responding. This is sent as data - rather than SMS - in very small packets charged as part of the data plan, not as incoming messages. I would like to know whether Android has a similar facility, or whether it's possible to implement something close to this using the Android APIs. To clarify, I define similar as: not an SMS message, but some data-driven solution; as close to real time as possible; and scalable - i.e., as the server part of a mobile app, I could notify thousands of app instances in seconds. I appreciate the app could be pull-based, HTTP request/response style, but ideally I don't want to be polling that heavily just to check for notifications - besides which, it's like drip-draining the data plan.

    Read the article

  • How can I populate highchart jQuery plugin dynamically from MVC action?

    - by Anders Svensson
    I'm trying out the Highcharts jQuery plugin for creating charts of data in an MVC application. But I need to get the data for the function dynamically from an Action Method. How can I do that? Taking the example from the Highcharts site (http://highcharts.com/documentation/how-to-use): var chart1; // globally available $(document).ready(function() { chart1 = new Highcharts.Chart({ chart: { renderTo: 'chart-container-1', defaultSeriesType: 'bar' }, title: { text: 'Fruit Consumption' }, xAxis: { categories: ['Apples', 'Bananas', 'Oranges'] }, yAxis: { title: { text: 'Fruit eaten' } }, series: [{ name: 'Jane', data: [1, 0, 4] }, { name: 'John', data: [5, 7, 3] }] }); }); How can I get the data in there dynamically from the action method? Someone suggested I might use JSon, but couldn't specify how. If this is the case, I would really appreciate a simple and specific example, because I don't know much about JSon. Any help appreciated!

    Read the article

  • Spring bean creation via deserialization

    - by mdma
    Spring has many different ways of creating beans, but is it possible to create a bean by deserializing a resource? My application has a number of Components, and each manipulates a certain type of data. During testing, the data object is instantiated and set directly on the component, e.g. component.setData(someDataObject). At runtime, the data is available as a serialized object and read in from the serialized stream by the component. Rather than having each component explicitly deserialize its data from the stream, it would be more consistent and flexible to have Spring deserialize the data object from a resource. Is there a DeserializerBeanFactory or something similar?

    Read the article

  • Best practice. Do I save html tags in DB or store the html entity value?

    - by Matt
    Hi guys, I was wondering which way I should do the following. I am using the TinyMCE WYSIWYG editor, which formats the user's data with the right HTML tags, and I need to save the data entered into the editor into a database table. Should I encode the HTML tags to their corresponding entities when inserting into the DB, so that when I get the data back from the table I don't have to encode it for XSS purposes, but I'd still have to use eval for the HTML tags to format the text? Or do I save the HTML tags into the database, and then when I get the data back from the database encode the HTML tags to their entities - but then, since the tags will appear to the user, I'd have to use the eval function to actually format the data as it was entered? My thoughts are with the first option; I just wondered what you guys thought.

    Read the article

  • how to remove a few lines from a Unicode registry file using batch commands in Windows?

    - by Cosmin
    Hi. I have a program that's generating some data in the registry. I save it with "reg export HKCU\Software\ProgramName\Data data.reg" (Unicode format). I need to take it to another computer and import it there so the program on that computer can use the data. But I have to remove some text lines from data.reg first. The text lines are easy to find because they contain certain strings. Right now I'm doing this manually (using WordPad) every few days, but maybe there is another way... Oh, and I can't install other programs on these computers (access is restricted), so I have to use batch/cmd files. What I have tried so far: redirecting the export to "con", but that is visual only, not in a variable; and using "for /F ...", but this works only with ANSI and removes blank lines. Can somebody please help me? Thank you.

    Read the article

  • Retrieving XML node from a path specified in an attribute value of another node

    - by Olivier PAYEN
    From this XML source : <?xml version="1.0" encoding="utf-8" ?> <ROOT> <STRUCT> <COL order="1" nodeName="FOO/BAR" colName="Foo Bar" /> <COL order="2" nodeName="FIZZ" colName="Fizz" /> </STRUCT> <DATASET> <DATA> <FIZZ>testFizz</FIZZ> <FOO> <BAR>testBar</BAR> <LIB>testLib</LIB> </FOO> </DATA> <DATA> <FIZZ>testFizz2</FIZZ> <FOO> <BAR>testBar2</BAR> <LIB>testLib2</LIB> </FOO> </DATA> </DATASET> </ROOT> I want to generate this HTML : <html> <head> <title>Test</title> </head> <body> <table border="1"> <tr> <td>Foo Bar</td> <td>Fizz</td> </tr> <tr> <td>testBar</td> <td>testFizz</td> </tr> <tr> <td>testBar2</td> <td>testFizz2</td> </tr> </table> </body> </html> Here is the XSLT I currently have : <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl"> <xsl:output method="html" indent="yes"/> <xsl:template match="/ROOT"> <html> <head> <title>Test</title> </head> <body> <table border="1"> <tr> <!--Generate the table header--> <xsl:apply-templates select="STRUCT/COL"> <xsl:sort data-type="number" select="@order"/> </xsl:apply-templates> </tr> <xsl:apply-templates select="DATASET/DATA" /> </table> </body> </html> </xsl:template> <xsl:template match="COL"> <!--Template for generating the table header--> <td> <xsl:value-of select="@colName"/> </td> </xsl:template> <xsl:template match="DATA"> <xsl:variable name="pos" select="position()" /> <tr> <xsl:for-each select="/ROOT/STRUCT/COL"> <xsl:sort data-type="number" select="@order"/> <xsl:variable name="elementName" select="@nodeName" /> <td> <xsl:value-of select="/ROOT/DATASET/DATA[$pos]/*[name() = $elementName]" /> </td> </xsl:for-each> </tr> </xsl:template> </xsl:stylesheet> It almost works, the problem I have is to retrieve the correct DATA node from the path specified in the "nodeName" attribute value of the STRUCT block.

    Read the article

  • Need help with SQL table structure transformation

    - by Arnis L.
    I need to perform an update/insert that simultaneously changes the structure of the incoming data. Think about shops that have a defined work time for each day of the week. Hopefully this explains what I'm trying to achieve:

        worktimeOrigin table - columns: shop_id, day, val
        data:
        123 | "monday"    | "9:00 AM - 18:00"
        123 | "tuesday"   | "9:00 AM - 18:00"
        123 | "wednesday" | "9:00 AM - 18:00"

        shop table - columns: id, worktimeDestination.id
        worktimeDestination table - columns: id, monday, tuesday, wednesday

    My aim: I would like to insert the data from the worktimeOrigin table into worktimeDestination and point the shop at the appropriate worktimeDestination row.

        shop table data: 123 | 1 (updated)
        worktimeDestination table data: 1 | "9:00 AM - 18:00" | "9:00 AM - 18:00" | "9:00 AM - 18:00" (inserted)

    Any ideas how to do that?
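    A hedged sketch of one way to do this, assuming SQL Server syntax; the worktimeDestination_id column name on shop and the identity column on worktimeDestination are assumptions, since the question only outlines the schema:

        -- Fold the day rows into one destination row for the shop (conditional aggregation).
        INSERT INTO worktimeDestination (monday, tuesday, wednesday)
        SELECT
            MAX(CASE WHEN [day] = 'monday'    THEN val END),
            MAX(CASE WHEN [day] = 'tuesday'   THEN val END),
            MAX(CASE WHEN [day] = 'wednesday' THEN val END)
        FROM worktimeOrigin
        WHERE shop_id = 123
        GROUP BY shop_id;

        -- Point the shop at the row just created (assumes worktimeDestination.id is an identity column).
        UPDATE shop
        SET worktimeDestination_id = SCOPE_IDENTITY()
        WHERE id = 123;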

    Read the article

  • passing request params from jQuery to jersey service using json

    - by ccduga
    Hi, I'm trying to POST (cross-domain) some data to a Jersey web service and retrieve a response (a GenericEntity object). The POST successfully gets mapped to my Jersey endpoint; however, when I pull the parameters from the request, they are empty.

        $.ajax({
            type: "POST",
            dataType: "application/json; charset=utf-8",
            url: jerseyNewUserUrl + '?jsoncallback=?',
            data: {'id': id, 'firstname': firstname, 'lastname': lastname},
            success: function(data, textStatus) {
                $('#jsonResult').html("some data: " + data.responseMsg);
            },
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                alert('error');
            }
        });

    This is my Jersey endpoint:

        @POST
        @Produces({ "application/x-javascript", MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
        @Path("/new")
        public JSONWithPadding addNewUser(@QueryParam("jsoncallback") @DefaultValue("empty") final String argJsonCallback,
                @QueryParam("id") final String argID,
                @QueryParam("firstname") final String argFirstName,
                @QueryParam("lastname") final String argLastName)

    Is there something missing from my $.ajax call?

    Read the article

  • jquery ajax request is Forbidden in FF 3.6.2 and IE. How to fix (any workaround)?

    - by 1gn1ter
    <script type="text/javascript"> $(function () { $("select#oblast").change(function () { var oblast_id = $("#oblast > option:selected").attr("value"); $("#Rayondiv").hide(); $.ajax({ type: "GET", contentType: "application/json", url: "http://site.com/Regions.aspx/FindGorodByOblastID/", data: 'oblast_id=' + oblast_id, dataType: "json", success: function (data) { if (data.length > 0) { var options = ''; for (p in data) { var gorod = data[p]; options += "<option value='" + gorod.Id + "'>" + gorod.Name + "</option>"; } $("#gorod").removeAttr('disabled').html(options); } else { $("#gorod").attr('disabled', false).html(''); } } }); }); }); </script>

    Read the article

  • How does one capture H.323 voice traffic on a VOIP network?

    - by Chris Holmes
    What I am trying to do is capture the WAV data of a phone conversation on a VOIP network using SharpPCap/PCap.Net. We are using the H.323 recommendation and my understanding is that voice data is located in the RTP packets. However, there is no way to heuristically determine if a UDP packet is a RTP packet, so we have to do more work before we can capture the data. The H.323 recommendation apparently uses a lot of traffic on specific TCP ports to negotiate the call before the WAV data is sent via RTP. However, I am having very little luck determining what data is actually sent on those TCP ports, when it is sent, what the packets look like, how to handle it, etc. If anyone has any information on how to go about this I'd really appreciate it. My Google-Fu seems to be failing me on this one.

    Read the article

  • Convert Decimal to ASCII

    - by Dan Snyder
    I'm having difficulty using reinterpret_cast. Before I show you my code, I'll explain what I'm trying to do. I'm trying to get a filename from a vector full of data being used by a MIPS I processor I designed. Basically, I compile a binary from a test program for my processor, dump all the hex values from the binary into a vector in my C++ program, convert all of those hex values to decimal integers, and store them in a DataMemory vector, which is the data memory unit for my processor. I also have instruction memory. So when my processor runs a SYSCALL instruction such as "Open File", my C++ operating system emulator receives a pointer to the beginning of the filename in my data memory. Keep in mind that data memory is full of ints, strings, globals, locals, all sorts of stuff. When I'm told where the filename starts, I do the following: convert the whole decimal integer element that is being pointed to to its ASCII character representation, then search from left to right to see if the string terminates; if not, load each character consecutively into a "filename" string. Do this until the string in memory terminates, then store filename in a table. My difficulty is generating filename from my memory. Here is an example of what I'm trying to do:

        Index  Vector    NewVector  ASCII   filename
        0      240faef0  128123792  'abc7'  'a'
        0      240faef0  128123792  'abc7'  'ab'
        0      240faef0  128123792  'abc7'  'abc'
        0      240faef0  128123792  'abc7'  'abc7'
        1      1234567a  243225     'k2s0'  'abc7k'
        1      1234567a  243225     'k2s0'  'abc7k2'
        1      1234567a  243225     'k2s0'  'abc7k2s'
        //EXIT LOOP//
        1      1234567a  243225     'k2s0'  'abc7k2s'

    Here is the code that I've written so far to get filename (I'm just applying this to element 1000 of my DataMemory vector to test functionality; 1000 is arbitrary):

        int i = 0;
        int step = 1000; // top->a0;
        string filename;
        char *temp = reinterpret_cast<char*>( DataMemory[1000] ); // convert to char
        cout << "a0:" << top->a0 << endl;                  // pointer supplied
        cout << "Data:" << DataMemory[top->a0] << endl;    // my vector at the pointed-to location
        cout << "Data(1000):" << DataMemory[1000] << endl; // the element I'm testing
        cout << "Characters:" << &temp << endl;            // my temporary char array

        while(&temp[i]!=0)
        {
            filename += temp[i]; // add most recent non-terminated character to string
            i++;
            if(i==4) // when 4 characters have been added..
            {
                i = 0;
                step += 1; // restart loop at the next element in DataMemory
                temp = reinterpret_cast<char*>( DataMemory[step] );
            }
        }
        cout << "Filename:" << filename << endl;

    So the issue is that when I do the conversion of my decimal element to a char array, I assume that 8 hex digits will give me 4 characters. Why isn't this the case? Here is my output:

        a0:0
        Data:0
        Data(1000):4428576
        Characters:0x7fff5fbff128
        Segmentation fault

    Read the article

  • JQuery.ajax success function returns empty

    - by viatropos
    I have a very basic AJAX function in JQuery: $.ajax({ url: "http://www.google.com", dataType: "html", success: function(data) { alert(data); } }); But the data is always an empty string, no matter what url I go to... Why is that? I am running this locally at http://localhost:3000, and am using JQuery 1.4.2. If I make a local response, however, like this: $.ajax({ url: "http://localhost:3000/test", dataType: "html", success: function(data) { alert(data); } }); ...it returns the html page at that address. What am I missing here?

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it - mostly around the fact that you can implement a ton of changes against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.

    If you're not familiar with the OUTPUT clause, you really should be - it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() - things you couldn't normally get back without running another query (or with a trigger, I guess, but that's not pretty).

    That inserted table I referenced - that's part of the 'behind-the-scenes' work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs - the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn't mean that an update is a delete followed by an insert; it's just the way it's handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff.

    MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it's new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don't worry about the fact that I turned on IDENTITY_INSERT, that's just so that I could insert the values.)

    One of the things I love about MERGE is that it feels almost cursor-like - the UPDATE bit feels like "WHERE CURRENT OF ...", and the INSERT bit feels like a single-row insert. And it is - but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action - very convenient.

    But as cool as $action is, that's not the point of my post. If it were, I hope you'd all be disappointed, as you can't really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a 'src' field that wasn't used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you need to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn't need to insert that into the actual table, just into a table for audit.

    This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you're American) and using a regular INSERT statement. This is also doable if you're using MERGE to just do INSERTs. In case you hadn't realised, you can use MERGE in place of an INSERT statement. It's just like the UPSERT-style statement we've just seen, except that we want nothing to match. That's easy to do: we just use ON 1=2. This is obviously more convoluted than a straight INSERT, and it's slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table's columns... Yes, of course. That's what deleted and inserted give you.
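    As a minimal sketch of the pattern described in this post - all table and column names below are made up for illustration; the post's own example isn't reproduced here:

        -- Hypothetical objects: a target, a staging source with an extra src column, and an audit table.
        CREATE TABLE dbo.Customers     (CustomerID int PRIMARY KEY, Name nvarchar(100));
        CREATE TABLE #Staging          (CustomerID int, Name nvarchar(100), src nvarchar(50));
        CREATE TABLE dbo.CustomerAudit (ChangeType nvarchar(10), Src nvarchar(50),
                                        OldName nvarchar(100), NewName nvarchar(100));

        -- UPSERT plus OUTPUT: note that the source column s.src is available to OUTPUT,
        -- alongside $action and the inserted/deleted pseudo-tables.
        MERGE dbo.Customers AS t
        USING #Staging AS s
            ON t.CustomerID = s.CustomerID
        WHEN MATCHED THEN
            UPDATE SET t.Name = s.Name
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (CustomerID, Name) VALUES (s.CustomerID, s.Name)
        OUTPUT $action, s.src, deleted.Name, inserted.Name
            INTO dbo.CustomerAudit (ChangeType, Src, OldName, NewName);

    The ON 1=2 variant mentioned above works the same way: with nothing matching, every source row falls into the WHEN NOT MATCHED branch, and s.src is still available to the OUTPUT clause.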

    Read the article

  • XmlDocument.InnerXml is null, but InnerText is not

    - by Adam Neal
    I'm using XmlDocument and XmlElement to build a simple (but large) XML document that looks something like: <Widgets> <Widget> <Stuff>foo</Stuff> <MoreStuff>bar</MoreStuff>...lots more child nodes </Widget> <Widget>...lots more Widget nodes </Widgets> My problem is that when I'm done building the XML, the XmlDocument.InnerXml is null, but the InnerText still shows all the text of all the child nodes. Has anyone ever seen a problem like this before? What kind of input data would cause these symptoms? I expected the XmlDocument to just throw an exception if it was given bad data. Note: I'm pretty sure this is related to the input data as I can only reproduce it against certain data sets. I also tried escaping the data with SecurityElement.Escape but it made no difference.

    Read the article

  • Excel > Microsoft Query > SQL Server > Multiple Parameters

    - by pojomx
    Hi, I'm relatively new to SQL Server and Excel/Microsoft Query. I have a query like this:

        Select ...[data]..., B1.b, B2.b, B3.b
        From TABLEA
        Inner join ( SELECT ...[data]..., sum(...) as b From TABLEB
                     WHERE Date between [startdate] and [enddate] ) as B1
        Inner join ( SELECT ...[data]..., sum(...) as b From TABLEB
                     WHERE Date between [startdate-1week] and [enddate] ) as B2
        Inner join ( SELECT ...[data]..., sum(...) as b From TABLEB
                     WHERE Date between [startdate-2weeks] and [enddate] ) as B3
        Where Date between [startdate] and [enddate]

    It works when I enter the dates manually, but I need them to be dynamic (supplied from Excel). When I put a "?" (for a parameter) in place of each of the dates, it throws the error "Invalid Parameter Number". How can I make this work from within Excel? I'm using SQL Server and a Microsoft Query connection.
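    One workaround that is commonly suggested for this situation - Microsoft Query tends to reject "?" parameters placed inside derived tables - is to move the query into a stored procedure that takes the two dates once and derives the shifted ranges itself, then call it from Excel as {CALL dbo.usp_WeeklyComparison(?, ?)}. The sketch below uses made-up table and column names, since the original query elides them:

        -- Hypothetical schema: SalesHeader stands in for TABLEA, SalesDetail for TABLEB.
        CREATE PROCEDURE dbo.usp_WeeklyComparison
            @StartDate date,
            @EndDate   date
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT a.StoreID, b1.b, b2.b, b3.b
            FROM SalesHeader AS a
            INNER JOIN (SELECT StoreID, SUM(Amount) AS b FROM SalesDetail
                        WHERE SaleDate BETWEEN @StartDate AND @EndDate
                        GROUP BY StoreID) AS b1 ON b1.StoreID = a.StoreID
            INNER JOIN (SELECT StoreID, SUM(Amount) AS b FROM SalesDetail
                        WHERE SaleDate BETWEEN DATEADD(week, -1, @StartDate) AND @EndDate
                        GROUP BY StoreID) AS b2 ON b2.StoreID = a.StoreID
            INNER JOIN (SELECT StoreID, SUM(Amount) AS b FROM SalesDetail
                        WHERE SaleDate BETWEEN DATEADD(week, -2, @StartDate) AND @EndDate
                        GROUP BY StoreID) AS b3 ON b3.StoreID = a.StoreID
            WHERE a.SaleDate BETWEEN @StartDate AND @EndDate;
        END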

    Read the article

  • FileReference and HttpService Browse Image Modify it then Upload it

    - by user177787
    Hello, I am trying to build an image uploader. The user can:
        - browse a local file with button.browse;
        - select one and save it as a FileReference;
        - then we do FileReference.load() and bind the data to our image control;
        - after that we apply a rotation to it and change the image data;
        - and finally we upload it to a server.

    To change the image data I get the matrix of the displayed image, transform it, then re-use the new matrix and bind it to my old image:

        private function TurnImage():void {
            // Turn it
            var m:Matrix = _img.transform.matrix;
            rotateImage(m);
            _img.transform.matrix = m;
        }

    Now the matter is that I really don't know how to send the data as a file to my server, because it's not stored in the FileReference, and the data inside FileReference is read-only so we can't change it or create a new one, which means I can't use .upload(). Then I tried HttpService.send, but I can't figure out how you send a file and not MXML.

    Read the article

  • Perl script to print out cars model and car color

    - by Gary Liggons
    I am trying to create a Perl script to print out car models and colors, and the data is below. I want to know if there is any way to make the car model heading a field so that I can print it any time I want. The data below is a CSV file, and the way I want the data to look on a report is below as well.

    This is how the data looks:

        Chevy
        blue,1978,Washington
        brown,1989,Dallas
        black,2001,Queens
        white,2003,Manhattan
        Toyota
        red,2003,Bronx
        green,2004,Queens
        brown,2002,Brooklyn
        black,1999,Harlem

    This is how I am trying to get the data to look in a report:

        Car Model: Toyota
        Color: Red
        Year: 2002
        City: Queens

    Read the article

  • Importing an Excel WorkSheet into a Datatable

    - by Nick LaMarca
    I have been asked to create import functionality in my application. I am getting an Excel worksheet as input. The worksheet has column headers followed by data. The users want to simply select an .xls file from their system and click upload, and the tool deletes the table in the database and adds the new data. I thought the best way would be to bring the data into a DataTable object and, with a foreach over every row in the DataTable, insert row by row into the db. My question: can anyone give me code to open an Excel file, determine what line the data starts on in the file, and import the data into a DataTable object?

    Read the article

  • Error opening streamreader because file is used by another process

    - by Geri Langlois
    I am developing an application to read an excel spreadsheet, validate the data and then map it to a sql table. The process is to read the file via a streamreader, validate the data, manually make corrections to the excel spreadsheet, validate again -- repeat this process until all data validates. If the excel spreadsheet is open, then when I attempt to read the data via a streamreader I get an error, "The process cannot access the file ... because it is being used by another process." Is there a way to remove the lock or otherwise read the data into a streamreader without having to open and close excel each time?

    Read the article

  • Provocative Tweets From the Dachis Social Business Summit

    - by Mike Stiles
    On June 20, all who follow social business and how social is changing how we do business and internal business structures, gathered in London for the Dachis Social Business Summit. In addition to Oracle SVP Product Development, Reggie Bradford, brands and thought leaders posed some thought-provoking ideas and figures. Here are some of the most oft-tweeted points, and our thoughts that they provoked.

    Tweet: The winners will be those who use data to improve performance.
    Thought: Everyone is dwelling on ROI. Why isn't everyone dwelling on the opportunity to make their product or service better (as if that doesn't have an effect on ROI)? Big data can improve you…let it.

    Tweet: High performance hinges on integrated teams that interact with each other.
    Thought: Team members may work well with each other, but does the team as a whole "get" what other teams are doing? That's the key to an integrated, companywide workforce. (Internal social platforms can facilitate that by the way).

    Tweet: Performance improvements come from making the invisible visible.
    Thought: Many of the factors that drive customer behavior and decisions are invisible. Through social, customers are now showing us what we couldn't see before…if we're paying attention.

    Tweet: Games have continuous feedback, which is why they're so engaging. Apply that to business operations.
    Thought: You think your employees have an obligation to be 100% passionate and engaged at all times about making you richer. Think again. Like customers, they must be motivated. Visible insight that they're advancing on their goals helps.

    Tweet: Who can add value to the data? Data will tend to migrate to where it will be most effective.
    Thought: Not everybody needs all the data. One team will be able to make sense of, use, and add value to data that may be irrelevant to another team. Like a strategized football play, the data has to get sent to the spot on the field where it's needed most.

    Tweet: The sale isn't the light at the end of the tunnel, it's the start of a new marketing cycle.
    Thought: Another reason the ROI question is fundamentally flawed. The sale is not the end of the potential return on investment. After-the-sale service and nurturing begins where the sales "victory" ends.

    Tweet: A dead sale is one that's not shared. People must be incentivized to share.
    Thought: Guess what, customers now know their value to you as marketers on your behalf. They'll tell people about your product, but you've got to answer, "Why should I?" And you've got to answer it with something substantial, not lame trinkets.

    Tweet: Social user motivations are competition, affection, excellence and curiosity.
    Thought: Your followers will engage IF they can get something for doing it, love your culture so much they want you to win, are consistently stunned at the perfection and coolness of your products, or have been stimulated enough to want to know more.

    Tweet: In Europe, 92% surveyed said they couldn't care less about brands.
    Thought: Oh well, so much for loving you or being impressed enough with your products & service that they want you to win. We've got a long way to go.

    Tweet: A complaint is a gift.
    Thought: Our instinct where complaints are concerned is to a) not listen, b) dismiss the one who complains as a kook, c) make excuses, and d) reassure ourselves with internal group-think that they're wrong and we're right. It's the perfect recipe for how to never, ever grow or get better. In a way, this customer cares more than you do.

    Tweet: 78% of consumers think peer recommendation is the best form of advertising. Eventually, engagement is going to eat advertising.
    Thought: Why is peer recommendation best? Trust. If a friend tells me how great a movie was, I believe him. He has credibility with me. He's seen it, and he could care less if I buy a ticket. He's telling me it was awesome because he sincerely believes that it was. That's gold.

    Tweet: 86% of customers are willing to pay more for a better customer experience.
    Thought: This "how mad can we make our customers without losing them" strategy has to end. The customer experience has actual monetary value, money you're probably leaving on the table.

    @mikestiles
    Photo: stock.xchng

    Read the article
