Search Results

Search found 12083 results on 484 pages for 'general purpose'.

Page 13/484 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
    New 7.3.1.3 parameter: TargetTaskSize  Old parameter: BranchID Multiple, deprecated 7.3.1.3 onwards  Parameter Location: Parameters > System Parameters > Engine > Proport   Default: 0   Engine Mode: Both   Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task.  Allocation will be affected by forecast tree branch size.  TargetTaskSize is automatically calculated.  It holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.   - As the forecast is generated the engine goes up the tree using max_fore_level and not top_level -1.  Max_fore_level has to be less than or equal to top_level -1.  Due to this requirement, combinations falling under the same top level -1 member must be in the same task.  A member of the top level -1 of the forecast tree is known as a branch.  An engine task therefore consists of one or more branches.     - Reveal current task size: go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema.  This is deprecated in 7.3.1.3 since there is no longer a means of adjusting the branch size directly.  The focus is now on proper hierarchy / forecast design.     - Control of tasks: The number of tasks created is the lowest of the number of branches (as defined by top level -1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize.  You may be used to using the branch multiplier in this calculation; as of 7.3.1.3, the branch ID multiple is deprecated.     - Discovery of current branch size: Review the 2nd highest level in the forecast tree (below highest/highest) as this is the level which determines the size of the branches.  If a few resulting tasks are too large it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.     - Control of forecast tree branch size:         - Run the following SQL to determine how evenly the branches are being split by the engine:             select count(*), branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;             This will give you an understanding of whether some of the individual branches have an unusually large number of rows, which might indicate that the engine is not efficiently dividing up the parallel tasks.         - Based on the results of this SQL, we may want to adjust the branch ID multiplier and/or the number of engines (both of these settings are found in the Engine Administrator).           select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;           This will give us an understanding of the level of the forecast tree at which the forecast is being generated.  Having a majority of combinations higher on the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict.           Based on the results of this we would adjust the Forecast Tree to see if choosing a different hierarchy might produce a forecast, with more combinations, at a lower level.
For example:   - Review the 2nd highest level in the forecast tree, below highest/highest, as this is the level which determines the size of the branches.   - If a few resulting tasks are too large it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.     - For example, say the highest level of the forecast tree is set to Brand/All Locations.   - You have 10 brands but 2 of the brands account for 67% and 29% of all combinations.   - There is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process.  Some possible solutions could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group.     - It is also possible to add a location dimension to this forecast tree level, for example Customer.  This will also reduce forecast tree branch size and will deliver a balanced task allocation.   - A correctly configured Forecast Tree is something that is done by the Implementation team and is not the responsibility of Oracle Support.  Allocation will be affected by forecast tree branch size.  When TargetTaskSize is set to 0, the default value, the system automatically calculates a value for 'TargetTaskSize' depending on the number of engines.   - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?     ANSWER: DEV strongly recommends that the setting of TargetTaskSize remain at the DEFAULT of ZERO (0).   - How to control the number of engines?  Determine how many CPUs are on the machine(s) that is (are) running the engine.  As mentioned earlier, the general rule is that you should designate two engines for each CPU that is available.  So for example, if you are running the engine on a machine that has 4 CPUs then you can have up to 8 engines designated in the Engine Administrator.  In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.     Where do I set the number of engines?  To add multiple computers where the engine will run, first back up the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines there.  For example, this will allow 3 engines to start:         <Entry>           <Key argument="ComputerNames" />           <Value type="string" argument="localhost,localhost,localhost" />           </Entry>  Otherwise, if there are no additional engines defined, the calculated value of 'TargetTaskSize' is used. (Oracle does not recommend changing the default value.) TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations.   - Level 1 combinations are known as the group size.  The engine manager will use this parameter to attempt to create branches of similar size.   * The engine manager will not create engines that do not have a branch. The engine divider algorithm uses the value of 'TargetTaskSize' as a system-preferred branch size to create branches that are more equal in size, which improves engine performance.
The engine divider will try to add as many branches as possible to an existing task, up to the limit of 'TargetTaskSize' level 1 combinations, before adding new tasks. Coming up next: - The engine divider - Group size - Level 1 combinations - MAX_FORE_LEVEL - Engine Parameters
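
    As an illustration only (this is not Oracle's engine divider code), the grouping behaviour described above can be sketched in Python as a greedy packing of branches into tasks: branches are added to the current task until the TargetTaskSize limit of level 1 combinations would be exceeded, at which point a new task is started. The branch sizes below mirror the skewed Brand example (two branches holding roughly 67% and 29% of all combinations), and the output shows why such a branch ends up as one oversized task that no parameter setting can split further.

        # Toy sketch of the greedy task grouping implied by TargetTaskSize.
        # Not Oracle code; function and variable names are made up for illustration.
        def divide_into_tasks(branch_sizes, target_task_size):
            """Pack branches into tasks of at most target_task_size level 1 combinations.
            A single branch larger than the target still becomes its own (oversized) task."""
            tasks, current, current_size = [], [], 0
            for size in sorted(branch_sizes, reverse=True):
                if current and current_size + size > target_task_size:
                    tasks.append(current)
                    current, current_size = [], 0
                current.append(size)
                current_size += size
            if current:
                tasks.append(current)
            return tasks

        # Ten branches, two of them far larger than the rest (the skewed Brand example).
        print(divide_into_tasks([670, 290, 8, 7, 6, 5, 4, 4, 3, 3], target_task_size=300))
        # -> [[670], [290, 8], [7, 6, 5, 4, 4, 3, 3]]  (the 670-combination branch cannot be split)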

    Read the article

  • Adjust timezone of an AVM Fritz!Box 7390

    It's been a while since I purchased an AVM Fritz!Box 7390, but since I'm using this 'PABX' here in Mauritius, I'm not really happy about the wrong time in the logs or on the connected handsets. Lately, I had some spare time to address this issue, and the following article describes how to adjust the timezone settings in general. The original idea came from an FAQ found in c't 21/11 (for a 7270, written in German) but I added a couple of things based on other resources online. The following tutorial may be valid for other models, too. Use your common sense and think before you act. Brief introduction to AVM Fritz!Box devices The Fritz!Box series of AVM has been around for more than a decade and those little 'red boxes' have a high level of versatility for your small office or home. High-speed connections, secure WLAN and convenient telephony make a home network out of any network. Whether it's a computer, tablet or smartphone, any device can be connected to the FRITZ!Box. And best of all, installation is so simple that users will be online in a matter of minutes. If you want to have peace of mind in your small network then a Fritz!Box is the easiest way to achieve that. I'm using my box primarily as WiFi access point, VoIP gateway and media server, but only because it came in second after my Linux system. Limitations in the administrative Web UI Unfortunately, there is no way to adjust the timezone settings in the Web UI at all - not even in Expert mode. I assume that this is part of the 'simplification' provided by AVM's design team. That's okay, as long as you reside in Central Europe, and the implicit time handling is correct for your location. Adjusting the timezone I got my device through an order at Amazon Germany already some time ago, and honestly I wasn't bothered too much about the pre-configured (fixed) timezone setting - CET or CEST depending on daylight saving. But you know, it's that kind of splinter at the back of your head that keeps nagging and bothering you indirectly. So, finally I sat down yesterday evening and did some quick research on how to change the timezone. Even though there are a number of results, I read the FAQ from the c't magazine first, as I consider it a trusted and safe source of information. Of course, it is most important to avoid 'bricking' your device. You've been warned - No support Tinkering with the configuration of any AVM device seems to be a violation of their official support channels. So, be warned and continue only if you're sure about what you are going to do. The following solutions are 'as-is' and they worked for my box flawlessly but may cause an issue in your case. Don't blame me... Solution 1 - Backup, modify and restore That's the way described in the c't article and a couple of other forum postings I found online, mainly from Australia. Log in to the administrative Web UI and navigate to 'System => Einstellungen sichern' (System => Backup configuration) and store your current configuration to a local file on your machine. Despite some online postings it is not necessary to specify a password in order to secure or encrypt your backup. IMHO, this only adds another unnecessary layer of complexity to the process. Anyway, next you should create another copy of your settings and keep it unmodified. That's our safety net to restore the current settings in case we have to issue a factory reset on the box. 
Now, open the configuration file with an advanced text editor which is capable of dealing with Unix line endings properly - Windows Notepad doesn't do the job, but WordPad or Notepad++ will. Personally, I don't care and simply use geany, gedit or nano on Linux. In total there are 3 modifications that we have to apply to the configuration file - one new line and two adjustments. First, we have to add an instruction near the top of the file that overrides the device's internal checksum validation. Without this line, your settings won't be accepted. Caution: The directives are case-sensitive and your outcome should read something like this: **** FRITZ!Box Fon WLAN 7390 CONFIGURATION EXPORTPassword=$$$$<ignore>FirmwareVersion=84.05.52CONFIG_INSTALL_TYPE=iks_16MB_xilinx_4eth_2ab_isdn_nt_te_pots_wlan_usb_host_dect_64415OEM=avmCountry=049Language=deNoChecks=yes**** CFGFILE:ar7.cfg/* * /var/flash/ar7.cfg * Mon Jul 29 10:49:18 2013 */ar7cfg {... Then search for the expression 'timezone' and you should find a section like this one (~ line 1113): timezone_manual {        enabled = no;        offset = 0;        dst_enabled = no;        TZ_string = "";        name = "";} We would like to manually handle the timezone setting in our device and therefore we have to enable it and set the proper value for Mauritius. The configuration block should look like this afterwards: timezone_manual {        enabled = yes;        offset = 0;        dst_enabled = no;        TZ_string = "MUT-4";        name = "";} We specify the designation and the offset in hours of the timezone we would like to have. Caution: The offset indicates the value one has to add to the local time to arrive at UTC. More details are described in the Explanation of TZ strings. Mauritius has GMT+4, which means that we have to subtract 4 hours from the local time to get UTC. Finally, we restore the modified configuration file via the administrative Web UI under 'System => Einstellungen sichern => Wiederherstellen' (System => Backup configuration => Restore). This triggers a reboot of the device, so please be patient and wait until the Web UI displays the login dialog again. Good luck! Solution 2 - Telnet A more elegant, read: technically interesting, way to adjust configuration settings in your Fritz!Box is to access it directly through Telnet. By default AVM disables that protocol channel and you have to enable it with a connected telephone. In order to activate the telnet service dial the following combination: #96*7* #96*8* (to disable telnet again after work has been completed) If you're using an AVM handset like the Fritz!Fon then you will receive a confirmation message on the display like so: telnetd ein Next, depending on your favourite operating system, you either launch a command prompt in Windows or a terminal in Linux, get your admin password ready, and you connect to your box like so: $ telnet fritz.box Trying 192.168.1.1...Connected to fritz.box.Escape character is '^]'.password: BusyBox v1.19.3 (2012-10-12 14:52:09 CEST) built-in shell (ash)Enter 'help' for a list of built-in commands.ermittle die aktuelle TTYtty is "/dev/pts/0"Console Ausgaben auf dieses Terminal umgelenkt# That's it, you are connected and we can continue to change the configuration manually. In order to adjust the timezone setting we have to open the ar7.cfg file. As we are now operating in a specialised environment, we only have limited capabilities at hand. One of those is a reduced version of vi - nvi. 
Let's open a second browser window with the fine manual page of nvi and start to edit our configuration file: # nvi /var/flash/ar7.cfg In our configuration file, we have to navigate to the timezone directives. The easiest way is to search for the expression 'timezone' by typing the following: /timezone    (press Enter/Return) Now, we should see the exact lines like in the backed-up version: timezone_manual {                                                                            enabled = no;                                                          offset = 0;                                                         dst_enabled = no;                                                   TZ_string = "";                                                     name = "";                                                        } And of course, we apply the same changes as described in the previous section: timezone_manual {                                                                            enabled = yes;                                                          offset = 0;                                                         dst_enabled = no;                                                   TZ_string = "MUT-4";                                                     name = "";                                                        } Finally, we have to write our changes back to the file and apply the new settings. :wq    (press Enter/Return) # ar7cfgchanged That's it! Finally, close the telnet session by pressing Ctrl+] and entering 'quit'. Additional ideas... There are a couple more possibilities to enhance and extend the usability of a Fritz!Box. There are lots of resources available on the net, but I'd like to name a few here. Especially for Linux users it is essential to be able to connect to any device remotely in a safe and secure way, and the installation of an SSH server on the box would be a first step to improve this situation, and also a way to avoid running telnet after all. Sometimes there might be problems in your VoIP connections; feel free to adjust the settings of codecs and connection handling, too. I guess you'll get the idea... The only frontiers are in your mind.

    Read the article

  • Google and Bing Map APIs Compared

    - by SGWellens
    At one of the local golf courses I frequent, there is an open grass field next to the course. It is about eight acres in size and mowed regularly. It is permissible to hit golf balls there—you bring and shag your own balls. My golf colleagues and I spend hours there practicing, chatting and in general just wasting time. One of the guys brings Ginger, the amazing, incredible, wonder dog. Ginger is a Portuguese Pointer. She chases squirrels, begs for snacks and supervises us closely to make sure we don't misbehave.     Anyway, I decided to make a dedicated web page to measure distances on the field in yards using online mapping services. I started with Google maps and then did the same application with Bing maps. It is a good way to become familiar with the APIs. Here are images of the final two maps: Google:  Bing:   To start with online mapping services, you need to visit the respective websites and get a developer key. I pared the code down to the minimum to make it easier to compare the APIs. Google maps required this CSS (or it wouldn't work): <style type="text/css">     html     {         height: 100%;     }       body     {         height: 100%;         margin: 0;         padding: 0;     } Here is how the map scripts are included. Google requires the developer key when loading the JavaScript, Bing requires it when the map object is created: Google: <script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=XXXXXXX&libraries=geometry&sensor=false" > </script> Bing: <script  type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"> </script> Note: I use jQuery to manipulate the DOM elements which may be overkill, but I may add more stuff to this application and I didn't want to have to add it later. Plus, I really like jQuery. 
Here is how the maps are created: Common Code (the same for both Google and Bing Maps):     <script type="text/javascript">         var gTheMap;         var gMarker1;         var gMarker2;           $(document).ready(DocLoaded);           function DocLoaded()         {             // golf course coordinates             var StartLat = 44.924254;             var StartLng = -93.366859;               // what element to display the map in             var mapdiv = $("#map_div")[0];   Google:         // where on earth the map should display         var StartPoint = new google.maps.LatLng(StartLat, StartLng);           // create the map         gTheMap = new google.maps.Map(mapdiv,             {                 center: StartPoint,                 zoom: 18,                 mapTypeId: google.maps.MapTypeId.SATELLITE             });           // place two markers         marker1 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng + .0001));         marker2 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng - .0001));           DragEnd(null);     } Bing:         // where on earth the map should display         var StartPoint = new  Microsoft.Maps.Location(StartLat, StartLng);           // create the map         gTheMap = new Microsoft.Maps.Map(mapdiv,             {                 credentials: 'Asbsa_hzfHl69XF3wxBd_WbW0dLNTRUH3ZHQG9qcV5EFRLuWEaOP1hjWdZ0A0P17',                 center: StartPoint,                 zoom: 18,                 mapTypeId: Microsoft.Maps.MapTypeId.aerial             });             // place two markers         marker1 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng + .0001));         marker2 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng - .0001));           DragEnd(null);     } Note: In the Bing documentation, mapTypeId: was missing from the list of options even though the sample code included it. Note: When creating the Bing map, use the developer Key for the credentials property. I immediately place two markers/pins on the map which is simpler that creating them on the fly with mouse clicks (as I first tried). The markers/pins are draggable and I capture the DragEnd event to calculate and display the distance in yards and draw a line when the user finishes dragging. 
    Here is the code to place a marker: Google: // ---- PlaceMarker ------------------------------------   function PlaceMarker(location) {     var marker = new google.maps.Marker(         {             position: location,             map: gTheMap,             draggable: true         });     marker.addListener('dragend', DragEnd);     return marker; }   Bing: // ---- PlaceMarker ------------------------------------   function PlaceMarker(location) {     var marker = new Microsoft.Maps.Pushpin(location,     {         draggable : true     });     Microsoft.Maps.Events.addHandler(marker, 'dragend', DragEnd);     gTheMap.entities.push(marker);     return marker; } Here is the code that runs when the user stops dragging a marker: Google: // ---- DragEnd -------------------------------------------   var gLine = null;   function DragEnd(Event) {     var meters = google.maps.geometry.spherical.computeDistanceBetween(marker1.position, marker2.position);     var yards = meters * 1.0936133;     $("#message").text(yards.toFixed(1) + ' yards');    // draw a line connecting the points     var Endpoints = [marker1.position, marker2.position];       if (gLine == null)     {         gLine = new google.maps.Polyline({             path: Endpoints,             strokeColor: "#FFFF00",             strokeOpacity: 1.0,             strokeWeight: 2,             map: gTheMap         });     }     else        gLine.setPath(Endpoints); } Bing: // ---- DragEnd -------------------------------------------   var gLine = null;   function DragEnd(Args) {    var Distance =  CalculateDistance(marker1._location, marker2._location);      $("#message").text(Distance.toFixed(1) + ' yards');       // draw a line connecting the points    var Endpoints = [marker1._location, marker2._location];           if (gLine == null)    {        gLine = new Microsoft.Maps.Polyline(Endpoints,            {                strokeColor: new Microsoft.Maps.Color(0xFF, 0xFF, 0xFF, 0),  // aRGB                strokeThickness : 2            });          gTheMap.entities.push(gLine);    }    else        gLine.setLocations(Endpoints);  }   Note: I couldn't find a function to calculate the distance between points in the Bing API, so I wrote my own (CalculateDistance). If you want to see the source for it, you can pick it off the web page. Note: I was able to verify the accuracy of the measurements by using the golf hole next to the field. I put a pin/marker on the center of the green, and then by zooming in, I was able to see the 150-yard markers on the fairway and put the other pin/marker on one of them. Final Notes: All in all, the APIs are very similar. Both made it easy to accomplish a lot with a minimum amount of code. In one aerial view, there are leaves on the trees, in the other, the trees are bare. I don't know which service has the newer data. Here are links to working pages: Bing Map Demo Google Map Demo I hope someone finds this useful. Steve Wellens   CodeProject
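
    The article does not include the source of CalculateDistance, so only the underlying math is sketched here: the standard spherical-earth haversine formula, written in Python rather than the article's JavaScript purely for illustration. The function name and the 1.0936133 meters-to-yards factor mirror the Google branch above; the coordinates are roughly the two starting pins placed 0.0002 degrees of longitude apart.

        import math

        def calculate_distance_yards(lat1, lng1, lat2, lng2):
            """Great-circle (haversine) distance between two lat/lng points, in yards."""
            earth_radius_m = 6371000.0  # mean Earth radius in meters
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            d_phi = math.radians(lat2 - lat1)
            d_lambda = math.radians(lng2 - lng1)
            a = (math.sin(d_phi / 2) ** 2
                 + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2)
            meters = 2 * earth_radius_m * math.asin(math.sqrt(a))
            return meters * 1.0936133  # same meters-to-yards factor as the Google version

        # Roughly the two starting pins from the example (0.0002 degrees of longitude apart).
        print(calculate_distance_yards(44.924254, -93.366959, 44.924254, -93.366759))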

    Read the article

  • Google and Bing Map APIs Compared

    - by SGWellens
    At one of the local golf courses I frequent, there is an open grass field next to the course. It is about eight acres in size and mowed regularly. It is permissible to hit golf balls there—you bring and shag your own balls. My golf colleagues and I spend hours there practicing, chatting and in general just wasting time. One of the guys brings Ginger, the amazing, incredible, wonder dog. Ginger is a Hungarian Vizsla (or Hungarian Pointer). She chases squirrels, begs for snacks and supervises us closely to make sure we don't misbehave. Anyway, I decided to make a dedicated web page to measure distances on the field in yards using online mapping services. I started with Google maps and then did the same application with Bing maps. It is a good way to become familiar with the APIs. Here are images of the final two maps: Google:  Bing:   To start with online mapping services, you need to visit the respective websites and get a developer key. I pared the code down to the minimum to make it easier to compare the APIs. Google maps required this CSS (or it wouldn't work): <style type="text/css">     html     {         height: 100%;     }       body     {         height: 100%;         margin: 0;         padding: 0;     } Here is how the map scripts are included. Google requires the developer key when loading the JavaScript, Bing requires it when the map object is created: Google: <script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=XXXXXXX&libraries=geometry&sensor=false" > </script> Bing: <script  type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"> </script> Note: I use jQuery to manipulate the DOM elements which may be overkill, but I may add more stuff to this application and I didn't want to have to add it later. Plus, I really like jQuery. 
Here is how the maps are created: Common Code (the same for both Google and Bing Maps):     <script type="text/javascript">         var gTheMap;         var gMarker1;         var gMarker2;           $(document).ready(DocLoaded);           function DocLoaded()         {             // golf course coordinates             var StartLat = 44.924254;             var StartLng = -93.366859;               // what element to display the map in             var mapdiv = $("#map_div")[0];   Google:         // where on earth the map should display         var StartPoint = new google.maps.LatLng(StartLat, StartLng);           // create the map         gTheMap = new google.maps.Map(mapdiv,             {                 center: StartPoint,                 zoom: 18,                 mapTypeId: google.maps.MapTypeId.SATELLITE             });           // place two markers         marker1 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng + .0001));         marker2 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng - .0001));           DragEnd(null);     } Bing:         // where on earth the map should display         var StartPoint = new  Microsoft.Maps.Location(StartLat, StartLng);           // create the map         gTheMap = new Microsoft.Maps.Map(mapdiv,             {                 credentials: 'XXXXXXXXXXXXXXXXXXX',                 center: StartPoint,                 zoom: 18,                 mapTypeId: Microsoft.Maps.MapTypeId.aerial             });           // place two markers         marker1 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng + .0001));         marker2 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng - .0001));           DragEnd(null);     } Note: In the Bing documentation, mapTypeId: was missing from the list of options even though the sample code included it. Note: When creating the Bing map, use the developer Key for the credentials property. I immediately place two markers/pins on the map which is simpler that creating them on the fly with mouse clicks (as I first tried). The markers/pins are draggable and I capture the DragEnd event to calculate and display the distance in yards and draw a line when the user finishes dragging. 
    Here is the code to place a marker: Google: // ---- PlaceMarker ------------------------------------   function PlaceMarker(location) {     var marker = new google.maps.Marker(         {             position: location,             map: gTheMap,             draggable: true         });     marker.addListener('dragend', DragEnd);     return marker; } Bing: // ---- PlaceMarker ------------------------------------   function PlaceMarker(location) {     var marker = new Microsoft.Maps.Pushpin(location,     {         draggable : true     });     Microsoft.Maps.Events.addHandler(marker, 'dragend', DragEnd);     gTheMap.entities.push(marker);     return marker; } Here is the code that runs when the user stops dragging a marker: Google: // ---- DragEnd -------------------------------------------   var gLine = null;   function DragEnd(Event) {     var meters = google.maps.geometry.spherical.computeDistanceBetween(marker1.position, marker2.position);     var yards = meters * 1.0936133;     $("#message").text(yards.toFixed(1) + ' yards');    // draw a line connecting the points     var Endpoints = [marker1.position, marker2.position];       if (gLine == null)     {         gLine = new google.maps.Polyline({             path: Endpoints,             strokeColor: "#FFFF00",             strokeOpacity: 1.0,             strokeWeight: 2,             map: gTheMap         });     }     else        gLine.setPath(Endpoints); } Bing: // ---- DragEnd -------------------------------------------   var gLine = null;   function DragEnd(Args) {    var Distance =  CalculateDistance(marker1._location, marker2._location);      $("#message").text(Distance.toFixed(1) + ' yards');       // draw a line connecting the points    var Endpoints = [marker1._location, marker2._location];           if (gLine == null)    {        gLine = new Microsoft.Maps.Polyline(Endpoints,            {                strokeColor: new Microsoft.Maps.Color(0xFF, 0xFF, 0xFF, 0),  // aRGB                strokeThickness : 2            });          gTheMap.entities.push(gLine);    }    else        gLine.setLocations(Endpoints);  }  Note: I couldn't find a function to calculate the distance between points in the Bing API, so I wrote my own (CalculateDistance). If you want to see the source for it, you can pick it off the web page. Note: I was able to verify the accuracy of the measurements by using the golf hole next to the field. I put a pin/marker on the center of the green, and then by zooming in, I was able to see the 150-yard markers on the fairway and put the other pin/marker on one of them. Final Notes: All in all, the APIs are very similar. Both made it easy to accomplish a lot with a minimum amount of code. In one aerial view, there are leaves on the trees, in the other, the trees are bare. I don't know which service has the newer data. Here are links to working pages: Bing Map Demo Google Map Demo I hope someone finds this useful. Steve Wellens   CodeProject

    Read the article

  • News feed APIs for general news

    - by dassouki
    I'm building a database + tool that scours news feeds for a certain term, for example "food poisoning from nuts". I want to scour social media sites, news sites, major news aggregators, etc. for that term. Question 1: What are some of the news aggregator APIs out there? Question 2: How would you go about coding and receiving only the latest news from the API?
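
    For the second question, one common pattern (a sketch only, assuming RSS/Atom feeds and the third-party feedparser library, neither of which is named in the question) is to poll the feed and keep a last-seen timestamp, so each pass returns only entries newer than the previous one. The feed URL below is a placeholder.

        import time
        import feedparser  # assumption: third-party RSS/Atom parser, not part of the question

        FEED_URL = "https://example.com/rss?q=food+poisoning+from+nuts"  # hypothetical feed URL

        def fetch_new_entries(feed_url, last_seen):
            """Return entries published after last_seen (a time.struct_time), newest first."""
            feed = feedparser.parse(feed_url)
            fresh = [e for e in feed.entries
                     if getattr(e, "published_parsed", None) and e.published_parsed > last_seen]
            return sorted(fresh, key=lambda e: e.published_parsed, reverse=True)

        last_seen = time.gmtime(0)              # start from the epoch on the first poll
        while True:
            for entry in fetch_new_entries(FEED_URL, last_seen):
                print(entry.published, entry.title, entry.link)
                last_seen = max(last_seen, entry.published_parsed)
            time.sleep(300)                     # poll every five minutes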

    Read the article

  • About unit testing a function in the zend framework and unit testing in general

    - by sanders
    Hello people, I am diving into the world of unit testing, and I am sort of lost. I learned today that unit testing is testing whether a function works. I wanted to test the following function: public function getEventById($id) { return $this->getResource('Event')->getEventById($id); } So I wanted to test this function as follows: public function test_Event_Get_Event_By_Id_Returns_Event_Item() { $p = $this->_model->getEventById(42); $this->assertEquals(42, EventManager_Resource_Event_Item_Interface); $this->assertType('EventManager_Resource_Event_Item_Interface', $p); } But then I got the error: 1) EventTest::test_Event_Get_Event_By_Id_Returns_Event_Item Zend_Db_Table_Exception: No adapter found for EventManager_Resource_Event /home/user/Public/ZendFramework-1.10.1/library/SF/Model/Abstract.php:101 /var/www/nrka2/application/modules/eventManager/models/Event.php:25 But then someone told me that I am currently unit testing and not doing an integration test. So I figured that I have to test the function getEventById in a different way. But I don't understand how. All this function does is call a resource and return the event by ID.

    Read the article

  • Just a general THANK YOU to EVERYONE. [closed]

    - by ajax81
    Hi All, I really just wanted to thank everybody that participates in the stackoverflow community. On more than one occasion, your minds have saved me from soul-eating project managers and career-ending deadlines. The commendable awareness exhibited by contributors that their answers are studied/used as learning material by millions of developers all over the world has created a regulated trust that seemingly keeps the nonsense (and egos) at the bottom of the barrel and out of the way. As an up-and-coming developer with so much to learn, I am grateful for each and every one of their patient contributions. I wish I could come up with a catchy/funny sign-off that makes everybody feel good, but I lack the funny bone that so many of the people on this site seem to have been born with. Instead, I can only leave my gratitude and a promise that as long as the community stays this great, I'll stay an avid reader...and one day be experienced enough to carry the torch of contribution. Sincerely, Daniel the Intern

    Read the article

  • Unit testing a database connection and general questions on database-dependent code and unit testing

    - by dotnetdev
    Hi, if I have a method which establishes a database connection, how could this method be tested? Returning a bool in the event of a successful connection is one way, but is that the best way? From a testability standpoint, is it best to have the connection method as one method and the method to get data back as a separate method? Also, how would I test methods which get data back from a database? I may do an assert against expected data, but the actual data can change and still be the right result set. EDIT: For the last point, to check data, if it's supposed to be a list of cars, then I can check they are real car models. Or if they are a bunch of web servers, I can have a list of existing web servers on the system, return that from the code under test, and get the test result. If the results are different, the data is the issue, not the query? Thanks
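
    The question is framed in .NET terms, but the pattern is language-neutral, so here is a sketch in Python purely to illustrate it: split the "open a connection" concern from the "get data" concern, inject the connection factory, and assert on the shape and behaviour of the result rather than on live rows that can legitimately change. All class and table names below are made up.

        import unittest
        from unittest import mock

        class CarRepository:
            """Toy repository: the connection factory is injected so tests can replace it."""
            def __init__(self, connect):
                self._connect = connect            # e.g. a function returning a DB-API connection

            def get_car_models(self):
                with self._connect() as conn:
                    cursor = conn.execute("SELECT model FROM cars")
                    return [row[0] for row in cursor.fetchall()]

        class CarRepositoryTest(unittest.TestCase):
            def test_returns_one_model_per_row(self):
                conn = mock.MagicMock()
                conn.__enter__.return_value = conn                 # act as its own context manager
                conn.execute.return_value.fetchall.return_value = [("Civic",), ("Golf",)]
                repo = CarRepository(lambda: conn)

                models = repo.get_car_models()

                # Assert on behaviour and shape rather than on live data that may change.
                self.assertEqual(["Civic", "Golf"], models)
                conn.execute.assert_called_once_with("SELECT model FROM cars")

        if __name__ == "__main__":
            unittest.main()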

    Read the article

  • What is the purpose of both target API and minSDK

    - by Scott Ferguson
    Can somebody explain to me the difference between the project target and the minimum SDK? I want my app to run on Donut devices, and the APK I built with a target of 7 worked just fine. When I set an explicit minimum SDK in the Android manifest of 4 (1.6) the compiler bitched at me that the target exceeded the minimum. I reset the target to 4 only to see what would happen, and now I've got compiler errors. An example is the START_NOT_STICKY constant in android.app.Service. It doesn't exist in API level 4, but does exist in API level 7. This is also the case with Service.onStartCommand(). In API level 7 you need to explicitly override this method, whereas in API level 4 you don't. So why does the app work in 1.6 despite all this? How could 1.6 know how to use START_NOT_STICKY when the associated API level doesn't know about it?

    Read the article

  • What is the purpose of WCF reliable session?

    - by bsnote
    The documentation around this topic is poor. I use WCF services with NetTcpBinding hosted in a Windows service. The problem is that a session is dropped when it is inactive for some time. What I need is a session which is always alive. Is a WCF reliable session something that can help, or can I just play with timeout settings?

    Read the article

  • General Web Programming/designing Question: ?

    - by Prasad
    Hi, I have been in web programming for 2 years (self-taught - a biology researcher by profession). I designed a small wiki with the needed functionalities and a scientific RTE - of course a lot more is expected. I used the MooTools framework and AJAX extensively. I was always curious whenever I saw query strings passed in the URL: long encrypted query strings getting passed directly to the server. Especially Google's design is like this. I think this is the start of providing a web service to a client - I guess. Now, my question is: is this a special, highly professional, efficient / advanced web design technique for communicating queries via the URL? I always felt that direct URL-based communication is faster. I tried my bit and could send a query through the URL directly. Here is the link: http://sgwiki.sdsc.edu/getSGMPage.php?8 By this, the client can directly link to the desired page instead of searching, and/or can automate. There are many possibilities. The next request: Can I be pointed to such a technique of web programming? Oops: I am sorry if I have not been able to convey my request clearly. Prasad.

    Read the article

  • Excluding a script from the general UrlRewrite rules

    - by Steven
    Hi, I have the following rewrite rules for a website: RewriteEngine On # Stop reading config files RewriteCond %{REQUEST_FILENAME} .*/web.config$ [NC,OR] RewriteCond %{REQUEST_FILENAME} .*/\.htaccess$ [NC] RewriteRule ^(.+)$ - [F] # Rewrite to url RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !^(/bilder_losning/|/bilder/|/gfx/|/js/|/css/|/doc/).* RewriteRule ^(.+)$ index.cfm?smartLinkKey=%{REQUEST_URI} [L] Now I have to exclude a script, including any query strings, from the above rules, so that I can access and execute it in the normal way; at the moment the whole URL is being ignored and forwarded to the index page. I need to have access to the script shoplink.cfm in the root, which takes the variables tduid and url (shoplink.cfm?tduid=1&url=). I have tried to resolve it using this: # maybe?: RewriteRule !(^/shoplink.cfm [QSA] but to be honest, I don't have much of a clue about URL rewriting and have no idea what I am supposed to write. I just know that the above will generate a nice 500 error. I have been looking around a lot on stackoverflow and other websites on the same subject, but all I see is people trying to exclude directories, not files. In the worst case I could add the script to a separate directory and exclude the directory from the rewrite rules, but I'd rather not, since the script should really remain in the root. Just also tried: RewriteRule ^/shoplink.cfm$ $0 [L] but that didn't do anything either. Anyone who can help me out on this subject? Thanks in advance. Steven Esser ColdFusion programmer

    Read the article

  • Union struct produces garbage and general question about struct nomenclature

    - by SoulBeaver
    I read about unions the other day (today) and tried the sample functions that came with them. Easy enough, but the result was clear and utter garbage. The first example is: union Test { int Int; struct { char byte1; char byte2; char byte3; char byte4; } Bytes; }; where an int is assumed to have 32 bits. After I set a value Test t; t.Int = 7; and then cout << t.Bytes.byte1 etc. to print the individual bytes, there is nothing displayed, but my computer beeps. Which is fairly odd I guess. The second example gave me even worse results. union SwitchEndian { unsigned short word; struct { unsigned char hi; unsigned char lo; } data; } Switcher; Looks a little wonky in my opinion. Anyway, from the description it says this should automatically store the result in a high/little endian format when I set the value like Switcher.word = 7656; and call cout << Switcher.data.hi << endl The result of this was symbols not even defined in the ASCII chart. Not sure why those are showing up. Finally, I had an error when I tried correcting the example by, instead of placing Bytes at the end of the struct, positioning it right next to it. So instead of struct {} Bytes; I wanted to write struct Bytes {}; This tossed me a big ol' error. What's the difference between these? Since C++ cannot have unnamed structs it seemed, at the time, pretty obvious that the Bytes positioned at the beginning and at the end are the things that name it. Except no, that's not the entire answer I guess. What is it then?
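
    The blank output and the beep in the first example are easier to see once you look at the raw bytes. A quick way to check them (sketched in Python rather than C++, and assuming a 32-bit little-endian int as on a typical x86 machine): the low byte of 7 is 0x07, which is the ASCII BEL control character, and the remaining bytes are NUL, so printing them as characters shows nothing but makes many terminals beep. The bytes in the second example simply aren't printable ASCII either.

        import struct

        # A 32-bit int with value 7, packed little-endian as on a typical x86 machine.
        raw = struct.pack("<i", 7)
        print(list(raw))           # [7, 0, 0, 0] -> byte1 holds 7, the rest hold 0

        # Printed as a character (what cout << char does), byte value 7 is the ASCII BEL
        # control code: no visible glyph, but many terminals beep.
        print(repr(chr(raw[0])))   # '\x07'

        # The second example: 7656 as an unsigned short, one byte at a time.
        hi, lo = struct.pack(">H", 7656)   # big-endian view: high byte first
        print(hi, lo)                      # 29 232 -> neither is a printable ASCII character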

    Read the article

  • Where to Store the Protection Trial Info for Software Protection Purpose

    - by Peter Lee
    It might be a duplicate of other questions, but I swear that I googled a lot and searched StackOverflow.com a lot, and I cannot find the answer to my question: In a C#.NET application, where should the protection trial info, such as Expiration Date and Number of Used Times, be stored? I understand that all kinds of software protection strategies can be cracked by a sophisticated hacker (because they can almost always get around the expiration checking step). But what I'm now going to do is just protect it in a reasonable manner so that a "common"/"advanced" user cannot screw it up. OK, in order to prove that I have googled and searched a lot at StackOverflow.com, I'm listing all the possible strategies I got: 1. Registry Entry First, some users might not even have access to read the Registry. Second, if we put the Protection Trial Info in a Registry Entry, the user can always find out where it is by comparing the differences before and after the software installation. They can just simply change it. OK, you might say that we should encrypt the Protection Trial Info, and yes we can do that. But what if the user just changes their system date before installing? OK, you might say that we should also put a last-used date; if something is wrong, the last-used date could work as a protection guide. But what if the user just uninstalls the software, deletes all Registry Entries related to this software, and then reinstalls the software? I have no idea how to deal with this. Please help. 2. A Plain File First, there are some places to put the plain file: 2.a) a simple XML file under the software installation path 2.b) a configuration file Again, the user can just uninstall the software, remove these plain file(s), and reinstall the software. 3. The Software Itself If we put the protection trial info (Expiration Date; we cannot put Number of Used Times) in the software itself, it is still susceptible to the cases I mentioned above. Furthermore, it's not even cool to do so. 4. A Trial Product-Key It works like a licensing process, that is, we put the trial info into an RSA-signed string. However, it requires too many steps for a user to try the software (they might lose patience): 4.a) The user downloads the software; 4.b) The user sends an email to request a Trial Product-Key by providing a user name (or email) or hardware info; 4.c) The server receives the request, RSA-signs it and sends it back to the user; 4.d) The user can now use it under the condition of (Expiration Date & Number of Used Times). Now, the server has a record of the user's username or hardware info, so the user's request for a second trial will be rejected. Is it legal to collect hardware info? In a word, the user has to do one extra step (request a Trial Product-Key) just to try the software, which is not cool (thinking of myself as a user). NOTE: This question is not about licensing; instead, it's about where to store the TRIAL info. After the trial expires, the user should ask for a license (CD-Key/Product-Key). I'm going to use an RSA signature (bound to user hardware).
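
    One way to make whichever location you pick (registry value, plain file, etc.) at least tamper-evident is to sign the trial record and to keep a last-used date so an obvious clock rollback can be detected, combining the ideas already listed above. A sketch of that idea follows, in Python rather than C#.NET purely for illustration, with a made-up secret key and field names, and with the usual caveat that a key embedded in the application can always be extracted by a determined attacker.

        import base64, hashlib, hmac, json
        from datetime import date

        SECRET = b"key-embedded-in-the-application"   # made-up; extractable by a determined attacker

        def encode_trial(info):
            body = json.dumps(info, sort_keys=True).encode()
            sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
            return base64.b64encode(body).decode() + "." + sig

        def decode_trial(blob):
            body_b64, sig = blob.rsplit(".", 1)
            body = base64.b64decode(body_b64)
            expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):
                raise ValueError("trial record has been tampered with")
            return json.loads(body)

        # First run: create the record; later runs: verify it, check dates/uses, and rewrite it.
        blob = encode_trial({"expires": "2013-12-31", "uses_left": 30, "last_used": str(date.today())})
        info = decode_trial(blob)
        today = str(date.today())
        if today < info["last_used"]:                          # system clock rolled back
            raise SystemExit("trial invalid")
        if today > info["expires"] or info["uses_left"] <= 0:
            raise SystemExit("trial expired")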

    Read the article

  • codeigniter and OOP general question regarding calling functions, and the parent constuctor

    - by thrice801
    OK, so I have some gaps in my understanding of PHP OOP, classes and functions specifically, and this whole constructor class deal. I use both Zend and CI, but right now I'm trying to figure this out in CI as it is less complicated. So all I'm trying to do is understand how to call a function from a view page in CodeIgniter. I understand that might go against MVC, but I'm working with an API and search results not from my database, so basically, I want to define a function in my class that I am able to call in one of my view pages.. and I keep getting a "Fatal error: Call to undefined function functionname" error no matter what I try. I thought I just had to declare public function testing() { echo "testing testing 123"; } but calling that from the view I get that error. Then I read something about having to go parent::Controller(); in the index of the class where the testing function also resides? But that didn't work either. Anyways, ya, can someone explain what I need to do in order to call the "testing()" function on one of my view pages? And clarification on the constructor class, and what exactly parent::Controller() even does, would be much appreciated as well.

    Read the article

  • general database modeling and django specific modeling

    - by Shreko
    I'm wondering what is the best way to model something like the following. Let's say my company sells metal bars (parameters/fields are: length, profile_type, quantity, etc.) of different profiles, where a profile may be pipe (pipe_diameter, wall_thickness) or hollow_rectangle (base, height, wall_thickness), or maybe some other profile with different parameters. Let's say the maximum number of profiles would be 12, each profile having between 2-5 parameters. Should everything be in a single table like table_bars (id, length, quantity, profile_type, pipe_diameter, wall_thickness, base, height, etc.), where profile_type would be (pipe, rectangle, etc.), or should every shape have its own table with its own parameters, with table_bars keeping only id, length, quantity, profile_type and profile_id? And are there any Django-specific issues if multiple tables are the best answer? Thanks
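
    Since the question mentions Django, here is one possible shape for the "own table per profile" option, sketched with Django's multi-table inheritance; the field names come from the question, while the class names and field types are illustrative assumptions. With this layout a bar row points at one profile row and each concrete shape keeps only its own columns, whereas the single-table option would put all shape columns (mostly NULL) on one model.

        from django.db import models

        class Profile(models.Model):
            """Base row shared by every profile type (Django multi-table inheritance)."""
            name = models.CharField(max_length=50)   # illustrative label, not from the question

        class Pipe(Profile):
            pipe_diameter = models.DecimalField(max_digits=8, decimal_places=2)
            wall_thickness = models.DecimalField(max_digits=8, decimal_places=2)

        class HollowRectangle(Profile):
            base = models.DecimalField(max_digits=8, decimal_places=2)
            height = models.DecimalField(max_digits=8, decimal_places=2)
            wall_thickness = models.DecimalField(max_digits=8, decimal_places=2)

        class Bar(models.Model):
            length = models.DecimalField(max_digits=8, decimal_places=2)
            quantity = models.PositiveIntegerField()
            profile = models.ForeignKey(Profile, on_delete=models.PROTECT)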

    Read the article

  • What's the purpose of noncreatable coclasses in IDL?

    - by sharptooth
    What is the reason for declaring noncreatable coclasses like the following in IDL? [ uuid(uuidhere), noncreatable ] coclass CoClass { [default] interface ICoClass; }; I mean, such a class will not be registered with COM anyway. What's the reason to mention it in the IDL file and in the type library produced by compiling that IDL file?

    Read the article

  • General Question About WPF Control Behavior and using Invoke

    - by Phil Sandler
    I have been putting off activity on SO because my current reputation is "1337". :) This is a question of "why" and not "how". By default, it seems that WPF does not set focus to the first control in a window when it's opening. In addition, when a textbox gets focus, by default it does not have its existing text selected. So basically when I open a window, I want focus on the first control of the window, and if that control is a textbox, I want its existing text (if any) to be selected. I found some tips online to accomplish each of these behaviors, and combined them. The code below, which I placed in the constructor of my window, is what I came up with: Loaded += (sender, e) => { MoveFocus(new TraversalRequest(FocusNavigationDirection.Next)); var textBox = FocusManager.GetFocusedElement(this) as TextBox; if (textBox != null) { Action select = textBox.SelectAll; //for some reason this doesn't work without using invoke. Dispatcher.Invoke(DispatcherPriority.Loaded, select); } }; So, my question. Why does the above not work without using Dispatcher.Invoke? Is something built into the behavior of the window (or textbox) causing the selected text to be de-selected post-loading? Maybe related, maybe not--another example of where I had to use Dispatcher.Invoke to control the behavior of a form: http://stackoverflow.com/questions/2602979/wpf-focus-in-tab-control-content-when-new-tab-is-created

    Read the article

  • Purpose of Zope Interfaces?

    - by Nikwin
    I have started using Zope interfaces in my code, and as of now, they are really only documentation. I use them to specify what attributes the class should possess, explicitly implement them in the appropriate classes and explicitly check for them where I expect one. This is fine, but I would like them to do more if possible, such as actually verifying that the class has implemented the interface, instead of just verifying that I have said that the class implements the interface. I have read the Zope wiki a couple of times, but still cannot see much more use for interfaces than what I am currently doing. So, my question is: what else can you use these interfaces for, and how do you use them to do more?
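
    One concrete "more" that ships with zope.interface itself is its verify module: verifyClass and verifyObject check an implementation against the interface (declared methods present with compatible signatures, declared attributes present on the object) and raise if something is missing. A minimal sketch, with a made-up interface name:

        from zope.interface import Attribute, Interface, implementer
        from zope.interface.verify import verifyClass, verifyObject

        class IEventStore(Interface):              # made-up interface for illustration
            backend = Attribute("Name of the storage backend")

            def save(event):
                """Persist an event."""

        @implementer(IEventStore)
        class MemoryEventStore:
            backend = "memory"

            def __init__(self):
                self._events = []

            def save(self, event):
                self._events.append(event)

        # Raises zope.interface.exceptions.BrokenImplementation (or BrokenMethodImplementation)
        # when a declared method or attribute is missing or has an incompatible signature.
        verifyClass(IEventStore, MemoryEventStore)      # checks the class (methods and signatures)
        verifyObject(IEventStore, MemoryEventStore())   # also checks attributes on the instance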

    Read the article

  • General question about DirectShow.NET, DirectShow and Windows Media Format

    - by Paul Andrews
    I searched and googled for an answer but couldn't find one. Basically I'm developing a webcam/audio streaming application which should capture audio and video from a PC (USB webcam/microphone) and send them to a receiving server. What the server will do with that is another story, and phase two (which I'm skipping for now). I wrote some code using DirectShow and Windows Media Format and it worked great for capturing audio/video and sending them to another client, but there's a major problem: latency. Everywhere on the internet everyone gave me the same answer: "sorry dude, but Media Format isn't for video conferencing; their codecs have too high latency". I thought I could get around the .wmv problems, but it seems it's not possible... this road ends here then. So I saw a few examples with DirectShow.NET which were faster for both audio and video. My question is: how come DirectShow.NET is faster and better for video/audio conferencing? Shouldn't it be just a .NET port of C++'s DirectShow? Am I missing something? I'm a bit confused at this point

    Read the article

  • General Drools Question

    - by El Guapo
    For the last few months my company has been using a product from a company called Informatica (previously AgentLogic) called RulePoint. This product has proven itself very easy to use, with a well-developed and easy-to-use SDK for customization. The way we use the product for CEP is fairly trivial: we have 2 sources which we monitor for our rule data, the first being a JMS queue, the second being a Jabber IM account. The product runs on any Java-based application server (WebLogic, Tomcat, etc) and runs just about flawlessly. Last week my boss says, "Hey, I've heard that we may be able to do the same thing we are doing with RulePoint with an open-source product called Drools. Check it out and let me know what you think." So, being the diligent worker, I have undertaken this task. I've downloaded all the files (version 5.0) and accompanying documentation and have started to read. I've read through just about all the docs and run most of the examples, but I still don't really see HOW Drools works for CEP. While there are examples for using Data (or Facts, I guess) from JMS, I don't see how this thing stays "running", continuously monitoring a queue until the application is actually stopped. RulePoint pretty much just sits and listens; Drools, however, seems not to. I could probably write a full-blown command-line application for our needs; however, I was hoping to leverage some of the benefits an application server provides. I guess I'm looking for some good tutorials or an example of how someone is using Drools and CEP in production. Thanks in advance for any information or advice you may be able to provide.

    Read the article

  • Returning errors from AMFPHP on purpose.

    - by Morieris
    When using Flash remoting with AMFPHP, what can I write in PHP that will trigger the 'status' method that I set up in my Responder in Flash? Or more generally, how can I determine if the service call has failed? The ideal solution for me would be to throw an exception in PHP server-side, and catch that exception in Flash client-side... How do other people handle server errors with Flash remoting? var responder = new Responder( function() { trace("some normal execution finished successfully. this is fine."); }, function(e) { trace("how do I make this trigger when my server tells me something bad happened?"); } ); myService = new NetConnection; myService.connect("http://localhost:88/amfphp/gateway.php"); myService.call("someclass.someservice", responder);

    Read the article

  • Purpose of lua_lock and lua_unlock?

    - by anon
    What is the point of lua_lock and lua_unlock? The following implies it's important: LUA_API void lua_gettable (lua_State *L, int idx) { StkId t; lua_lock(L); t = index2adr(L, idx); api_checkvalidindex(L, t); luaV_gettable(L, t, L->top - 1, L->top - 1); lua_unlock(L); } LUA_API void lua_getfield (lua_State *L, int idx, const char *k) { StkId t; TValue key; lua_lock(L); t = index2adr(L, idx); api_checkvalidindex(L, t); setsvalue(L, &key, luaS_new(L, k)); luaV_gettable(L, t, &key, L->top); api_incr_top(L); lua_unlock(L); } The following implies it does nothing: #define lua_lock(L) ((void) 0) #define lua_unlock(L) ((void) 0) Please enlighten.

    Read the article

  • General N-Tier Architecture Question

    - by whatispunk
    In an N-Tier app you're supposed to have a business logic layer and a data access layer. Is it bad to simply have two assemblies: BusinessLogicLayer.dll and DataAccessLayer.dll to handle all this logic? How do you actually represent these layers? It seems silly, the way I've seen it, to have a BusinessLogic class library containing classes like: CustomerBusinessLogic.cs, OrderBusinessLogic.cs, etc., each calling their appropriately named cousin in the DataAccessLayer class library, i.e. CustomerDataAccess.cs, OrderDataAccess.cs. I want to create a web app using MVP and it doesn't seem so cut and dried as this. There are lots of opinions about where the business logic is supposed to be put in MVP and I'm not sure I've found a really great answer yet. I want this project to be easily testable, and I am trying to adhere to TDD methodologies as best I can. I intend to use MSTest and Rhino Mocks for testing. I was thinking of something like the following for my architecture: I'd use LINQ-To-SQL to talk to the database, WCF services to define data contract interfaces for the business logic layer, and then MVP with ASP.NET Forms for the UI/BLL. Now, this isn't the start of this project; most of the LINQ stuff is already done, so it's staying. The WCF service would replace the existing DataAccessLayer assembly and the UI/BLL would replace the BusinessLogicLayer assembly, etc. This sort of makes sense in my head, but it's getting really late. Anyone that's traveled down this path have any guidance? Good links? Warnings? Thanks!

    Read the article
