Search Results

Search found 30575 results on 1223 pages for 'number systems'.


  • Regex | validation error

    - by MMRUser
    I'm trying to validate a USA mobile number. Since I'm using a pre-built JavaScript validation library, I just replaced this regex validation with the previous one which came with the library. Previous validation regex: "telephone":{ "regex":"/^[0-9\-\(\)\ ]{10,10}$/", "alertText":"* Invalid phone number"}, This works for input like 2126661234, but not in the standard USA format. After I changed it: "telephone":{ "regex":"/^[2-9]\d{2}-\d{3}-\d{4}$/", "alertText":"* Invalid phone number"}, now I get an error for every entry, even if I enter 212-666-1234. I really don't know what is wrong, so I'm hoping for some help.
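
    For what it's worth, the pattern itself accepts the hyphenated format; a quick sanity check (sketched in Python, though the regex semantics match JavaScript's here) suggests the problem lies in how the library is handed the pattern (e.g. the surrounding /.../ delimiters), not in the pattern:

        import re

        # The pattern from the question, minus the /.../ delimiters that the
        # validation library wraps around it.
        pattern = re.compile(r"^[2-9]\d{2}-\d{3}-\d{4}$")

        print(bool(pattern.match("212-666-1234")))  # True: the pattern is fine
        print(bool(pattern.match("2126661234")))    # False: no separators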

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to help the EM Administrator monitor the health of the EM infrastructure. Taking feedback delivered from customers directly and through customer advisory boards, some nice enhancements have been made to the "Manage Cloud Control" sections of the UI, commonly known in the EM community as "the MTM pages" (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we'll highlight some of the new information that's on display in these redesigned pages and explain how the information they present can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we'll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you'll see the new layout, which includes 3 tabs containing more drill-down information.

    The Repository Tab
    The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information the EM Administrator needs to review from time to time to ensure that the infrastructure is in good health. Rather than go through every panel, let's call out a few and let you explore the others later on your own EM site. Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important pieces of information relating to availability and reliability: Is the database in Archive Log mode? Is the database using Flashback? When was the last database backup taken? In the test environment above the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates no recorded backups have been taken. The next region of interest on this page shows key information around the Repository configuration, specifically the initialisation parameters (from the spfile). If you're storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you'll note there is now a check performed on the active configuration to ensure that you're using, at the very least, Oracle's minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values, if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality that customers have requested has been added. First up: restarting Repository jobs.
    As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation; you can now take care of this directly from within the UI. Next, you'll see that a feature has been added to allow the EM Administrator to customise the run-time of some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don't clash with Production backups, etc., is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid such clashes. Moving on to the next tab, let's select the Metrics tab.

    The Metrics Tab
    There are some big changes here; this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, or large numbers of new hosts or new target types, into EM, and the impact this has on the Repository. This page helps the EM Administrator get to grips with this. Let's take a quick look at two regions on this page. First up there's a bubble chart showing a comprehensive view of the top resource consumers of metric data, over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates relative volume. You can see from this example above that a quick glance shows Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely behind are a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7MB of data; the total information collected for WebLogic Server and Application Deployments is 0.38MB and 0.37MB respectively. If you want this breakdown of the volume of data collected, simply hover over a bubble in the chart and you'll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost, in terms of number of rows, number of collections and storage cost of data, for each metric type. Looking at another panel on this page we can see a different view of this data. This view shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) sorted by volume of data. In the case above, the largest metric collection by volume over the last 30 days is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It's a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation.
    Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we'll look at on this page is the Schema tab.

    The Schema Tab
    Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for its users, not least because well-managed storage ensures continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend here, showing that storage in the EM Repository has been steadily consumed over the last few days. This is normal, as the EM being used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you're likely to see a trend of space allocation that plateaus. The table below the trend chart shows the top 20 tables/indexes sorted in descending order of space consumed. You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region. The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository. Of course, it has also been a long-requested feature to have the ability to modify these default retention periods, and you can do that using this screen too. As there are interdependencies between some data elements, you can't modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups, and retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository's storage footprint and its performance. For now, we're just highlighting the feature's visibility on these new pages. As a user of EM12c, we hope the new features you see here address some of the feedback that's been given on these pages over the past few releases. We'll look out for any comments or feedback you have on these pages!

    Read the article

  • A bug in graph while using Google Visualization API on IE

    - by gili
    Hi, I'm using the Google Visualization API to build a line chart graph. It works fine on FF and Chrome but I'm having problems on IE7: the scaling of the x-axis (string) and y-axis (integer) is all wrong. Both axes have the same values for some reason, but naturally those values are wrong. My code is the following: var data = new google.visualization.DataTable(); data.addColumn('string', 'Date'); data.addColumn('number', '??????? ???? ??????'); data.addColumn('number', '??????? ??????'); data.addColumn('number', '??????'); data.addColumn('number', '???????'); data.addColumn('number', '???????'); var n = userRightGuessArray.length; data.addRows(userRightGuessArray.length); data.setCell(0, 0, '?? ?????? ??'); data.setCell(0, 1, 0); data.setCell(0, 2, 0); data.setCell(0, 3, 0); data.setCell(0, 4, 0); data.setCell(0, 5, 0); for (var t = 1; t < n; t++) { /* ... */ } // Create and draw the visualization. var chart = new google.visualization.ImageLineChart(document.getElementById('line_div')); chart.draw(data, {width: 400, legend: 'top' /*, showValueLabels: false*/}); thank you for your help, Gili

    Read the article

  • Big problem with Dijkstra algorithm in a linked list graph implementation

    - by Nazgulled
    Hi, I have my graph implemented with linked lists, for both vertices and edges, and that is becoming an issue for the Dijkstra algorithm. As I said in a previous question, I'm converting code that uses an adjacency matrix to work with my graph implementation. The problem is that when I find the minimum value I get an array index. This index would have matched the vertex index if the graph vertices were stored in an array instead, and access to the vertex would be constant. I don't have time to change my graph implementation, but I do have a hash table, indexed by a unique number (one that does not start at 0; it's like 100090000), which is the problem I'm having. Whenever I need to, I use the modulo operator to get a number between 0 and the total number of vertices. This works fine when I need an array index from the number, but when I need the number from the array index (to access the calculated minimum-distance vertex in constant time), not so much. I tried to search for how to invert the modulo operation, like: 100090000 mod 18000 = 10000, and 10000 invmod 18000 = 100090000, but couldn't find a way to do it. My next alternative is to build some sort of reference array where, in the example above, arr[10000] = 100090000. That would fix the problem, but would require looping over the whole graph one more time. Do I have any better/easier solution with my current graph implementation?
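
    Modulo discards information, so there is no general inverse; the usual fix is exactly the reference-table idea, which costs one pass and then gives constant-time lookups both ways. A minimal sketch in Python (the sample IDs are made up for illustration):

        # The modulo mapping id -> index loses information, so keep the
        # inverse as a plain lookup table built once, O(V).
        num_vertices = 18000

        ids = [100090000 + i for i in range(5)]   # hypothetical vertex ids

        index_of = {vid: vid % num_vertices for vid in ids}   # id -> index
        id_at = {idx: vid for vid, idx in index_of.items()}   # index -> id

        idx = index_of[100090002]
        print(idx, id_at[idx])   # constant-time lookups in both directions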

    Read the article

  • Graph spacing algorithm

    - by David
    Hi, I am looking for an algorithm that would be useful for determining x,y coordinates for a number of objects to display on screen. Each object can be related to another object, and there can be any number of relationships and any number of these objects. There is no restriction on the overall size of the area on which to display these objects. I am writing this in PHP and would be looking to store the coordinates in an array.
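
    The standard answer for laying out related objects with no fixed area is a force-directed algorithm: connected objects attract, all objects repel, and positions settle over a number of iterations. A rough sketch (in Python rather than PHP, but the loop ports directly; the constants are arbitrary starting points):

        import math, random

        def layout(nodes, edges, iterations=200, k=100.0):
            # random starting positions
            pos = {n: [random.uniform(0, 500), random.uniform(0, 500)] for n in nodes}
            for _ in range(iterations):
                disp = {n: [0.0, 0.0] for n in nodes}
                # every pair repels (keeps unrelated objects apart)
                for a in nodes:
                    for b in nodes:
                        if a == b:
                            continue
                        dx = pos[a][0] - pos[b][0]
                        dy = pos[a][1] - pos[b][1]
                        d = math.hypot(dx, dy) or 0.01
                        f = k * k / d          # repulsive force
                        disp[a][0] += dx / d * f
                        disp[a][1] += dy / d * f
                # related objects attract along their edges
                for a, b in edges:
                    dx = pos[a][0] - pos[b][0]
                    dy = pos[a][1] - pos[b][1]
                    d = math.hypot(dx, dy) or 0.01
                    f = d * d / k              # attractive force
                    disp[a][0] -= dx / d * f
                    disp[a][1] -= dy / d * f
                    disp[b][0] += dx / d * f
                    disp[b][1] += dy / d * f
                # move each node a capped step along its net force
                for n in nodes:
                    pos[n][0] += max(-10.0, min(10.0, disp[n][0]))
                    pos[n][1] += max(-10.0, min(10.0, disp[n][1]))
            return pos

        print(layout(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("b", "d")]))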

    Read the article

  • How to write the Visitor Pattern for Abstract Syntax Tree in Python?

    - by bodacydo
    My colleague suggested that I write a visitor pattern to navigate the AST. Can anyone tell me more about how I would start writing it? As far as I understand, each Node in the AST would have a visit() method (?) that would somehow get called (from where?). That about concludes my understanding. To simplify everything, suppose I have nodes Root, Expression, Number, Op and the tree looks like this:

        Root
          |
        Op(+)
         /   \
    Number(5)  Op(*)
               /   \
        Number(2)  Number(444)

    Can anyone think of how the visitor pattern would visit this tree to produce the output: 5 + 2 * 444? Thanks, Boda Cydo.
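
    A minimal sketch of how this usually looks in Python: the visit logic lives in a visitor class, and each node's accept() dispatches to the matching visit method (the node classes here are assumptions based only on the names in the question):

        # Each node type knows how to accept a visitor; the visitor holds
        # the operation (here: pretty-printing the expression).
        class Number:
            def __init__(self, value):
                self.value = value
            def accept(self, visitor):
                return visitor.visit_number(self)

        class Op:
            def __init__(self, op, left, right):
                self.op, self.left, self.right = op, left, right
            def accept(self, visitor):
                return visitor.visit_op(self)

        class PrintVisitor:
            def visit_number(self, node):
                return str(node.value)
            def visit_op(self, node):
                return "%s %s %s" % (node.left.accept(self), node.op,
                                     node.right.accept(self))

        tree = Op("+", Number(5), Op("*", Number(2), Number(444)))
        print(tree.accept(PrintVisitor()))   # 5 + 2 * 444

    Adding a new operation (evaluation, type checking) then means adding a new visitor class, without touching the node classes.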

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction
    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them.

    Example dataflow

    In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input and all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this (it is intuitive to think that more components means more work); however, this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and that the number of components is actually one of the less important ones; having said that, I have never proven that assertion, and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup
    I have a 2GB data file which is a list of 4,731,904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource-intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad-core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.

    The Variables
    Here are the three combinations of components that I am going to test: One Conditional Split: a single Conditional Split component, CSPL Split by Month of Birth and Income Category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use. Derived Column & Conditional Split: a Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000","GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get.
    A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category; CSPL Split by Month of Birth and Income Category. Three Conditional Splits: a Conditional Split component that produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category; CSPL Split by Month of Birth 1 & 2. Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration, here is a screenshot of the dataflow containing three Conditional Split components. As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: (1) the dataflow containing just one Conditional Split (i.e. #1) will be quicker; (2) there is no significant difference between any of them; (3) one of the two dataflows containing multiple transformation components will be quicker. Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis
    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the averages that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too.

    The Explanation
    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of month of birth and income category, 24 in total. These expressions get evaluated in the order that they appear, and hence if we assume that month of birth and income category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is (1 (the minimum) + 24 (the maximum)) / 2 = 12.5. Now take a look at the screenshots for the second dataflow.
    We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before; only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row, which would account for the slightly longer average execution time for this dataflow. Now on to the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for income category and one for month of birth; the expressions in the Conditional Split in the third dataflow only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1. In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant number of extra CPU cycles; that's where our duration difference comes from.

    The Wrap-up
    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean use fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results; let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow - Designing for performance (webinar). Any comments? Let me know! @Jamiet
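
    For the record, the expected-evaluation arithmetic from the explanation above, restated as a quick sketch (assuming, as the post does, uniformly distributed outputs and in-order evaluation of the cases):

        # Expected evaluations per row for an in-order list of cases,
        # with each output equally likely: (1 + cases) / 2.
        def expected_evals(cases):
            return (1 + cases) / 2.0

        one_split      = expected_evals(24)                          # 12.5
        der_then_split = 1 + expected_evals(24)                      # 13.5
        three_splits   = expected_evals(2) + 2 * expected_evals(12)  # 14.5

        print(one_split, der_then_split, three_splits)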

    Read the article

  • How to read data from keyboard and store it in a file, shell script

    - by Sunil Kumar Sahoo
    Hi, I have a file, try.SPEC. This file contains a line "Version: 1.0.0.1". Now I want to write a shell script which will read a version number from the keyboard and insert it into the file. E.g., if the user enters the version number 2.1.1.1, then the file will have "Version: 2.1.1.1" instead of "Version: 1.0.0.1". I want to be able to change it without knowing which version number is currently in the spec file. Thanks, Sunil Kumar Sahoo
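
    In a shell script this is typically a one-line sed substitution on "Version: .*"; the same read-replace-rewrite logic, sketched in Python for concreteness:

        import re

        new_version = input("Enter the new version number: ")

        with open("try.SPEC") as f:
            text = f.read()

        # Replace whatever follows "Version: ", regardless of the current value.
        text = re.sub(r"^Version: .*$", "Version: " + new_version,
                      text, flags=re.MULTILINE)

        with open("try.SPEC", "w") as f:
            f.write(text)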

    Read the article

  • Project References DLL version hell

    - by Mr Shoubs
    We're having problems getting Visual Studio to pick up the latest version of a DLL from one of our projects. We have multiple class library projects (e.g. BusinessLogic, ReportData) and a number of web services; each has a reference to a Connectivity DLL we've written (this reference to the Connectivity DLL is the problem). We always point references to the DLL in the bin/debug folder (which is where we always build to for any given project), and all custom DLL references have CopyLocal = True and SpecificVersion = False. ReportData has a reference to BusinessLogic (which also has a reference to Connectivity; I don't see why this should cause a problem, but thought it was worth mentioning). The weird thing is, when you click "Add Reference" and browse to Connectivity/bin/debug, if you hover the mouse over the DLL file, the correct (latest) version is shown (version and file version are always incremented together), but when you click OK, a previous version number is pulled through. Even when I look in the current project's debug folder (where Copy Local would put the DLL after compiling), it shows the latest version number. NOWHERE can I find the previous version of the DLL outside of Visual Studio, but in that project's references it has the old version, even though the path is correct. I'm at a loss as to where it might be getting the old versions from, or even why it wants that one. This is possibly the most frustrating problem I have ever come across. Does anyone know how to ensure the latest version is pulled through (preferably automatically or on compile)? EDIT: Although not exactly the scenario I'm dealing with, I was reading this article and somewhere it mentions that the CLR ignores revision numbers. Understandable (even though this hasn't been a problem before; we're on revision 39), so I thought I would update the build number. That still didn't work. In a vain attempt I thought I would update the minor version number and see if that made any difference. I'm not saying this is the answer, as I have to check quite a few things first, but on the face of it this seems to have solved my problem... Further edit: In other class libraries this seems to have solved the problem; however, in a test Windows application it still pulls a previous version through :( If I increment the minor version number again, the same problem comes back and I am left with the wrong version being pulled through. Further edit: I created an entirely new project, added a reference and still had the exact same problem. This suggests the problem is restricted to the project I am referencing. Wish I knew why! Anyone had this problem before and know how to get around it? HELP!

    Read the article

  • reset, Tweener, AS3

    - by VideoDnd
    How do I reset my numbers after they count? I want something like an onComplete function. DESCRIPTION: My animation advances 120 pixels from its current position, then flies off the stage. It was looping, and would yoyo to the bottom before advancing. I don't want my numbers yoyoing or flying off the stage. My numbers must move 120 pixels forward each count, then return. NumbersView.as ('the code works, but in a messed-up way, as described'):

        package {
            import flash.display.DisplayObject;
            import flash.display.MovieClip;
            import flash.utils.Dictionary;
            import flash.events.Event;
            import caurina.transitions.Tweener;

            public class NumbersView extends MovieClip {
                private var _listItems:Array;
                private var previousNums:Array;
                private const numHeight:int = 120;

                public function NumbersView() {
                    _listItems = new Array();
                    previousNums = new Array();
                    //Tweener.init();
                    var item:NumberImage;
                    for (var i:Number = 0; i < 9; i++) {
                        item = new NumberImage();
                        addChild(item);
                        item.x = i * item.width;
                        _listItems.push(item);
                    }
                }

                public function setTime($number:String):void {
                    var nums:Array = $number.split("");
                    //trace("$number = " + $number);
                    for (var i:Number = 0; i < nums.length; i++) {
                        if (nums[i] == previousNums[i]) continue;
                        Tweener.removeTweens(_listItems[i]);
                        //newY:int = -numHeight;
                        var newY:int = int(nums[i]) * -numHeight;
                        trace("newY = " + newY);
                        trace("currY = " + _listItems[i].y);
                        /*---------------------- PROBLEM AREA, RIGHT HERE ----------------------*/
                        //if (_listItems[i].y < 0) _listItems[i].y = numHeight;
                        //Tweener.addTween(_listItems[i], { y:newY, time:3 } );
                        Tweener.addTween(_listItems[i], { y:_listItems[i].y + newY, time:3 } );
                    }
                    previousNums = nums;
                }
            }
        }

    Tweener example: http://hosted.zeh.com.br/tweener/docs/en-us/parameters/onComplete.html

    Read the article

  • create a random sequence, skip to any part of the sequence

    - by Michael Xu
    Hi everyone. In Linux there is an srand() function, where you supply a seed and it will guarantee the same sequence of pseudorandom numbers in subsequent calls to the random() function afterwards. Let's say I want to store this pseudorandom sequence by remembering this seed value. Furthermore, let's say I want the 100 thousandth number in this pseudorandom sequence later. One way would be to supply the seed number using srand(), then call random() 100 thousand times, and remember that number. Is there a better way of skipping all 99,999 other numbers in the pseudorandom list and directly getting the 100 thousandth number in the list? Thanks, m
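
    As far as I know, glibc's random() can't be seeked directly (it isn't a plain LCG), but for a generator you control, the jump-ahead trick is standard: treat the update x -> (a*x + c) mod m as an affine map and compose it with itself by binary exponentiation, reaching the n-th value in O(log n) steps. A sketch in Python using the textbook rand() constants (an assumption for illustration, not what random() uses):

        def lcg_jump(x, n, a=1103515245, c=12345, m=2**31):
            # Compose the affine map x -> a*x + c with itself n times,
            # via binary exponentiation of the (multiplier, increment) pair.
            A, C = 1, 0  # identity map
            while n:
                if n & 1:
                    A, C = (A * a) % m, (A * c + C) % m
                a, c = (a * a) % m, (a * c + c) % m
                n >>= 1
            return (A * x + C) % m

        seed = 42
        print(lcg_jump(seed, 100000))   # jump straight to the 100,000th value

        x = seed                        # the same thing the slow way
        for _ in range(100000):
            x = (1103515245 * x + 12345) % 2**31
        print(x)                        # matches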

    Read the article

  • Diophantine equation

    - by krishna chaitanya
    Write an iterative program that finds the largest number of McNuggets that cannot be bought in exact quantity. Your program should print the answer in the following format (where the correct number is provided in place of n): "Largest number of McNuggets that cannot be bought in exact quantity: n". In Python.
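
    This is the classic Frobenius ("Chicken McNugget") exercise. The pack sizes aren't stated in the question, so the sketch below assumes the usual 6, 9 and 20 piece packs from the standard version of the problem:

        # Assumes 6, 9 and 20 piece packs (not stated in the question).
        def can_buy(n):
            for a in range(n // 6 + 1):
                for b in range(n // 9 + 1):
                    r = n - 6 * a - 9 * b
                    if r >= 0 and r % 20 == 0:
                        return True
            return False

        # If 6 consecutive quantities are buyable, every larger quantity is
        # too (keep adding 6-packs), so track the last failure before such a run.
        largest, streak, n = 0, 0, 0
        while streak < 6:
            n += 1
            if can_buy(n):
                streak += 1
            else:
                streak, largest = 0, n

        print("Largest number of McNuggets that cannot be bought "
              "in exact quantity: %d" % largest)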

    Read the article

  • Does weak typing offer any advantages?

    - by sub
    Don't confuse this with static vs. dynamic typing! You all know JavaScript's/PHP's infamous type systems. PHP example: echo "123abc"+2; // 125 - the reason for this is explained in the PHP docs but still: this hurts echo "4"+1; // 5 - Oh please echo "ABC"*5; // 0 - WTF // That's too much, seriously now. // This here might actually be a use for weak typing, but no - it has to output garbage. JavaScript example: // Good old JavaScript, maybe you'll do better? alert("4"+1); // "41" - Oh come on. alert("abc"*3); // NaN - What the... // Have your creators ever heard of the word "consistence"? Python example: # Python's type system is actually a mix. # It spits errors on senseless things like the first example below AND # allows intelligent actions like the second example. >>> print("abc"+1) Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> print("abc"+1) TypeError: Can't convert 'int' object to str implicitly >>> print("abc"*5) abcabcabcabcabc Ruby example: puts 4+"1" # TypeError - as expected puts "abc"*4 # abcabcabcabc - makes sense After these examples it should be clear that PHP/JavaScript probably have the most inconsistent type systems out there. This is a fact and really not subjective. Now, when having a closer look at the type systems of Ruby and Python, it seems like they have a much more intelligent and consistent type system. I think these examples weren't really necessary, as we all know that PHP/JavaScript have a weak and Python/Ruby have a strong type system. I just wanted to mention why I'm asking this. Now I have two questions: When looking at those examples, what are the advantages of PHP's and JavaScript's type systems? I can only find downsides: they are inconsistent, and I think we know that this is not good; type conversions are hardly controllable; bugs are more likely to happen and much harder to spot. Do you prefer one of the two systems? Why? Personally I have worked with PHP, JavaScript and Python so far and must say that Python's type system really only has advantages over PHP's and JavaScript's. Does anybody here not think so? Why?

    Read the article

  • Sql or VB for Access

    - by vijay
    I have a table in Access as below:

        SI Number        Time
        1.14172E+20      13:30:35
        1244066650       18:58:48
        1244066650       19:03:12
        1244066650       19:05:50
        01724656007_dsl  22:15:20
        01724656007_dsl  22:18:00
        01724656007_dsl  22:24:28
        1141530407       10:27:49
        1141530407       10:29:13

    And the required output in the same table is:

        SI Number        Time      Diff
        1.14172E+20      13:30:35
        1244066650       18:58:48
        1244066650       19:03:12  0:04:24
        1244066650       19:05:50  0:02:38
        01724656007_dsl  22:15:20
        01724656007_dsl  22:18:00  0:02:40
        01724656007_dsl  22:24:28  0:06:28
        1141530407       10:27:49
        1141530407       10:29:13  0:01:24

    What I require: if a record's SI Number equals the previous record's SI Number, then that record's Diff column gets its Time minus the previous record's Time; otherwise Diff remains blank. Urgent help required. Vijay
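
    A sketch of the row-by-row logic (in Python for brevity; in Access the same result is usually produced with a correlated subquery or a VBA loop over a recordset ordered the same way as the table):

        from datetime import datetime

        rows = [
            ("1244066650", "18:58:48"),
            ("1244066650", "19:03:12"),
            ("1244066650", "19:05:50"),
            ("01724656007_dsl", "22:15:20"),
            ("01724656007_dsl", "22:18:00"),
        ]

        prev_si, prev_time = None, None
        for si, t in rows:
            cur = datetime.strptime(t, "%H:%M:%S")
            # Diff only when this row's SI Number matches the previous row's
            diff = str(cur - prev_time) if si == prev_si else ""
            print(si, t, diff)
            prev_si, prev_time = si, cur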

    Read the article

  • What is the Bias Value of Floating Point Numbers?

    - by mudge
    In learning how floating point numbers are represented in computers, I have come across the term "bias value" that I do not quite understand. The bias value in floating point numbers has to do with whether the exponent part of a floating point number is negative or positive. The bias value of a (single precision) floating point number is 127, which means that 127 is always added to the exponent part of the number. How does doing this help determine if the exponent is negative or positive?
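
    The point of the bias is that the stored exponent field is always an unsigned number: actual exponent = stored field - 127, so negative exponents are representable without needing a sign bit inside the field. A quick check in Python for 32-bit floats:

        import struct

        def exponents(x):
            # Reinterpret the float's bits as an unsigned int and pull out
            # the 8-bit exponent field (bits 23..30).
            bits = struct.unpack(">I", struct.pack(">f", x))[0]
            stored = (bits >> 23) & 0xFF
            return stored, stored - 127   # stored field vs. actual exponent

        print(exponents(1.0))    # (127, 0)
        print(exponents(8.0))    # (130, 3)    8 = 1.0 * 2**3
        print(exponents(0.25))   # (125, -2)   0.25 = 1.0 * 2**-2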

    Read the article

  • Django select distinct sum

    - by yoshi
    I have the following (greatly simplified) table structure:

        Order:
            order_number = CharField
            order_invoice_number = CharField
            order_invoice_value = CharField

    An invoice number can be identical on more than one order (order O1 has invoice number I1, order O2 has invoice number I1, etc.). All the orders with the same invoice number have the same invoice value. For example:

        Order no.  Invoice no.  Value
        O1         I1           200
        O2         I1           200
        O3         I1           200
        O4         I2           50
        O5         I2           100

    What I am trying to do is sum over all the invoice values, but without adding the invoices with the same number more than once. The sum for the above items would be: 200 + 50 + 100. I tried doing this using s = orders.values('order_invoice_id').annotate(total=Sum('order_invoice_value')).order_by() and s = orders.values('order_invoice_id').order_by().annotate(total=Sum('order_invoice_value')), but I didn't get the desired result. I tried a few different solutions from similar questions around here but I couldn't get the desired result. I can't figure out what I'm doing wrong and what I should actually do to get a sum that uses each invoice value just once.
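
    If a single aggregate query proves awkward, one workaround (a sketch, reusing the field names from the model above) is to pull the distinct (invoice number, value) pairs out of the database and sum in Python, since the value column is a CharField anyway; note this matches the example, which counts I2 once per distinct value:

        # Distinct (invoice number, value) pairs, summed client-side.
        pairs = (Order.objects
                 .values_list('order_invoice_number', 'order_invoice_value')
                 .distinct())

        total = sum(float(value) for _number, value in pairs)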

    Read the article

  • Converting to Base 10

    - by incrediman
    Hi, let's say I have a string or array which represents a number in base N, N > 1, where N is a power of 2. Assume the number being represented is larger than the system can handle as an actual number (an int or a double etc.). How can I convert that to a decimal string? I'm open to a solution for any base N which satisfies the above criteria (binary, hex, ...).
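
    In a language with big integers the conversion is just a fold over the digits; a sketch in Python (in a language without big ints, the same idea is implemented with repeated divide-by-10 over the digit array):

        # Accumulate most-significant-first digits, then let str() render
        # the decimal string; Python ints are arbitrary precision.
        def to_decimal_string(digits, base):
            value = 0
            for d in digits:
                value = value * base + int(d, 36)  # '0'-'9','a'-'z' -> 0-35
            return str(value)

        print(to_decimal_string("ff", 16))    # 255
        print(to_decimal_string("10011", 2))  # 19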

    Read the article

  • How do I work out IEEE 754 64-bit Floating Point Double Precision?

    - by yousef gassar
    Hello, I have done this in 32-bit but I couldn't do it in 64-bit; please, I need help. I am stuck on this question and don't know how to work it out. This is the question: Below are two numbers represented in IEEE 754 64-bit Floating Point Double Precision; the bias of the signed exponent is -1023. Any particular real number 'N' represented in 64-bit form (i.e. with the following bit fields: 1-bit Sign, 11-bit Exponent, 52-bit Fraction) can be expressed in the form ±1.F₂ × 2^X by substituting the bit-field values using formula (IV.I):

        N = (-1)^S × 1.F₂ × 2^(E - 1023) for 0 < E < 2047 .................... (IV.I)

    where N = the number represented, S = Sign bit value, E = Exponent = X + 1023, F = Fraction (or Mantissa) are the values in the 1-, 11- and 52-bit fields respectively in the IEEE 754 64-bit FP representation. Using formula (IV.I), express the 64-bit FP representation of each number as: (i) a binary number of the form ±1.F₂ × 2^X; (ii) a decimal number of the form ±0.F₁₀ × 10^Y {limit F₁₀ to 10 decimal places}.

        Number 1:
            Sign     (1 bit):   0
            Exponent (11 bits): 1000 0001 001
            Fraction (52 bits): 1111 0111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000

        Number 2:
            Sign     (1 bit):   1
            Exponent (11 bits): 1000 0000 000
            Fraction (52 bits): 1001 0010 0001 1111 1011 0101 0100 0100 0100 0010 1101 0001 1000

    I know I have to use the formula for each of these, but how do I work it out? Is it like this? N = (-1)^S × 1.F₂ × 2^(E - 1023) = +1.1111 0111 0000 ... 0000₂ × 2^(1000 0001 001₂ - 1023)?
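
    One way to check hand-worked answers: assemble the three bit fields into a 64-bit word and let the machine decode it. A sketch in Python:

        import struct

        def decode(sign, exponent_bits, fraction_bits):
            # Pack sign (1 bit), exponent (11 bits), fraction (52 bits)
            # into one 64-bit word and reinterpret it as a double.
            word = (sign << 63) | (int(exponent_bits, 2) << 52) | int(fraction_bits, 2)
            return struct.unpack(">d", struct.pack(">Q", word))[0]

        print(decode(0, "10000001001", "11110111" + "0" * 44))
        # 2012.0, i.e. +1.11110111(binary) x 2^(1033 - 1023)

        print(decode(1, "10000000000",
                     "1001001000011111101101010100010001000010110100011000"))
        # -3.141592653589793, i.e. the second number is -pi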

    Read the article

  • Validation Summary for Collections

    - by Myster
    Hi All, EDIT: upgraded this question to MVC 2.0. With ASP.NET MVC 2.0, is there an existing method of creating a Validation Summary that makes sense for models containing collections? If not, I can create my own validation summary. Example model:

        public class GroupDetailsViewModel
        {
            public string GroupName { get; set; }
            public int NumberOfPeople { get; set; }
            public List<Person> People { get; set; }
        }

        public class Person
        {
            [Required(ErrorMessage = "Please enter your Email Address")]
            [RegularExpression(@"^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$", ErrorMessage = "Please enter a valid Email Address")]
            public string EmailAddress { get; set; }

            [Required(ErrorMessage = "Please enter your Phone Number")]
            public string Phone { get; set; }

            [Required(ErrorMessage = "Please enter your First Name")]
            public string FirstName { get; set; }

            [Required(ErrorMessage = "Please enter your Last Name")]
            public string LastName { get; set; }
        }

    The existing summary, <%=Html.ValidationSummary %>, looks like this if nothing is entered:

        The following error(s) must be corrected before proceeding to the next step
        * Please enter your Email Address
        * Please enter your Phone Number
        * Please enter your First Name
        * Please enter your Last Name
        * Please enter your Email Address
        * Please enter your Phone Number
        * Please enter your First Name
        * Please enter your Last Name

    The design calls for headings to be inserted like this:

        The following error(s) must be corrected before proceeding to the next step
        Person 1
        * Please enter your Email Address
        * Please enter your Phone Number
        * Please enter your First Name
        * Please enter your Last Name
        Person 2
        * Please enter your Email Address
        * Please enter your Phone Number
        * Please enter your First Name
        * Please enter your Last Name

    Read the article

  • Batch File input validation - Make sure user entered an integer

    - by B2Ben
    I'm experimenting with a DOS batch file to perform a simple operation which requires the user to enter a non-negative integer. I'm using simple batch-file techniques to get user input:

        @ECHO OFF
        SET /P UserInput=Please Enter a Number:

    The user can enter any text they want here, so I would like to add a routine to make sure what the user entered is a valid number. That is, they entered at least one character, and every character is a digit from 0 to 9. I'd like something I can feed the UserInput into. At the end of the routine would be an if/then that runs different statements based on whether or not it was actually a valid number. I've experimented with loops and substrings and such, but my knowledge and understanding are still slim... so any help would be appreciated. I could build an executable, and I know there are nicer ways to do things than batch files, but at least for this task I'm trying to keep it simple by using a batch file.
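
    The check itself is just "non-empty and all characters 0-9"; here it is sketched in Python for reference (in batch the usual trick is piping the value through findstr with a digits-only regular expression):

        # The validation rule from the question: at least one character,
        # every character a digit 0-9.
        user_input = input("Please Enter a Number: ")

        if user_input.isdigit():
            print("Valid non-negative integer:", int(user_input))
        else:
            print("Not a valid number.")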

    Read the article

  • Binding a nullable int to an asp:TextBox

    - by Slauma
    I have a property int? MyProperty as a member in my datasource (ObjectDataSource). Can I bind this to a TextBox, like <asp:TextBox ID="MyTextBox" runat="server" Text='<%# Bind("MyProperty") %>' /> Basically I want to get a null value displayed as blank "" in the TextBox, and a number as a number. If the TextBox is blank MyProperty shall be set to null. If the TextBox has a number in it, MyProperty should be set to this number. If I try it I get an exception: "Blank is not a valid Int32". But how can I do that? How to work with nullable properties and Bind? Thanks in advance!

    Read the article

  • Why is the JVM stack-based and the DalvikVM register based?

    - by aioobe
    I'm curious: why did Sun decide to make the JVM stack-based and Google decide to make the Dalvik VM register-based? I suppose the JVM can't really assume that a certain number of registers is available on the target platform, since it is supposed to be platform-independent. Therefore it just postpones the register allocation, etc., to the JIT compiler. (Correct me if I'm wrong.) So the Android guys thought, "hey, that's inefficient, let's go for a register-based VM right away..."? But wait, there are multiple different Android devices; what number of registers did Dalvik target? Are the Dalvik opcodes hardcoded for a certain number of registers? Do all current Android devices on the market have about the same number of registers? Or is there register re-allocation performed during dex-loading? How does all this fit together?

    Read the article

  • Algorithm for digit summing?

    - by Joe
    I'm searching for an algorithm for digit summing. Let me outline the basic principle: say you have a number: 18268. Then 1 + 8 + 2 + 6 + 8 = 25, and 2 + 5 = 7, and 7 is our final number. It's basically adding each digit of the number repeatedly until we get down to a single (also known as a 'core') digit. It's often used by numerologists. I'm searching for an algorithm (it doesn't have to be language-specific) for this. I have searched Google for the last hour with terms such as "digit sum algorithm" and whatnot but got no suitable results. Any help would be great, thanks.
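
    What's being described is the digital root. A sketch of both the literal loop and the constant-time shortcut (for n > 0 the digital root is 1 + (n - 1) mod 9):

        # The literal repeated digit sum from the question.
        def digital_root_loop(n):
            while n >= 10:
                n = sum(int(d) for d in str(n))
            return n

        # The classic arithmetic shortcut, no looping required.
        def digital_root_math(n):
            return 0 if n == 0 else 1 + (n - 1) % 9

        print(digital_root_loop(18268))   # 7
        print(digital_root_math(18268))   # 7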

    Read the article

  • Using for or while loops

    - by Gary
    Every month, 4 or 5 text files are created. The data in the files is pulled into MS Access and used in a mail merge. Each file contains a header. This is an example: HEADER|0000000130|0000527350|0000171250|0000058000|0000756600|0000814753|0000819455|100106. The 2nd field is the number of records contained in the file (excluding the header line). The last field is the date in the form yymmdd. Using gawk (for Windows), I've done OK with rearranging/modifying the data and writing it all out to a new file for importing into Access, except for the following: I'm trying to create a unique ID number for each record. The ID number has the form 1mmddyyXXXX, where XXXX is a number padded with leading zeros. Using the header above, the first record in the output file would get the ID number 10106100001 and the last record would get the ID 10106100130. I've tried putting the second field of the header into a variable, rearranging the last header field into the required date format, and then looping with "for" statements to append the XXXX part of the ID and then outputting it all with printf, but so far I've been complete rubbish at it. Thanks for your help! gary
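
    The string surgery itself is small; a sketch in Python using the header from the question (the awk version is the same idea: split on "|", reorder the date, and zero-pad a counter with printf "%04d"):

        # Build the 1mmddyyXXXX IDs from the header fields.
        header = ("HEADER|0000000130|0000527350|0000171250|0000058000"
                  "|0000756600|0000814753|0000819455|100106")
        fields = header.split("|")

        count = int(fields[1])            # 130 records in the file
        yymmdd = fields[-1]               # "100106"
        mmddyy = yymmdd[2:] + yymmdd[:2]  # "010610"

        ids = ["1%s%04d" % (mmddyy, i) for i in range(1, count + 1)]
        print(ids[0], ids[-1])            # 10106100001 10106100130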

    Read the article

  • Programming logic and design: please friends, I need a flowchart or pseudocode

    - by alex
    The Midville park maintains records containing info about players on its soccer teams. Each record contains a player's first name, last name, and team number. The teams are:

        Team number  Team name
        1            Goal Getters
        2            The Force
        3            Top Gun
        4            Shooting Stars
        5            Midfield Monsters

    Design a program that accepts player data and creates a report that lists each player along with his or her team number and team name.
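
    The exercise itself asks for pseudocode or a flowchart; the shape of the lookup is simply a table keyed by team number, sketched here in Python (the player records are hypothetical sample input):

        # Team number -> team name lookup from the exercise.
        TEAMS = {
            1: "Goal Getters",
            2: "The Force",
            3: "Top Gun",
            4: "Shooting Stars",
            5: "Midfield Monsters",
        }

        players = [("Alex", "Smith", 3), ("Dana", "Lee", 5)]  # sample input

        for first, last, team in players:
            print(first, last, team, TEAMS.get(team, "Unknown team"))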

    Read the article
