Search Results

Search found 1304 results on 53 pages for 'martin petrov'.

  • Disable a style in VCL application

    - by Martin
    I am adding a VCL style to my application, but I am also giving users an option to turn it off, and I cannot figure out how to do this globally at runtime. Setting "TStyleManager.AutoDiscoverStyleResources := false" almost works, but it pops up an error message saying "Style {style set} not found", and after dismissing the message it does exactly what I want. This is the code I would expect to work, but it does not:

        if (not ParamObj.UseDarkStyle) then
        begin
          //TStyleManager.AutoDiscoverStyleResources := false;
          TStyleManager.SetStyle(TStyleManager.SystemStyle);
        end;

    I also (originally) tried TStyleManager.TrySetStyle('Windows'); but this does not work either. I have tried this on both sides of "Application.Initialize;" with no difference. What am I missing? Thanks in advance, Martin
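
    A sketch of one way this is often handled (an assumption, not verified against this exact setup): disable style auto-discovery before any styled form is created, then fall back to the platform style by name, checking the Boolean that TrySetStyle returns instead of relying on its error dialog.

        // Delphi sketch, assuming Vcl.Themes is in the uses clause and
        // ParamObj is the question's own settings object.
        Application.Initialize;
        if not ParamObj.UseDarkStyle then
        begin
          TStyleManager.AutoDiscoverStyleResources := False;
          // second parameter False suppresses the "Style ... not found" dialog
          if not TStyleManager.TrySetStyle('Windows', False) then
            TStyleManager.SetStyle(TStyleManager.SystemStyle);
        end;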

  • Encoding non-English characters

    - by Martin
    Hey there! I'm having a bit of trouble here and I was hoping someone could throw me a hint :) I'm getting some GET vars with JS, but I have trouble with non-Latin charsets, Cyrillic for example. The Cyrillic var appears correct in the URL, but when I retrieve it with JS I get some dummy string. I was wondering if there is a function similar to "unescape" for such a case. Alternatively, if someone knows a way I could convert a Cyrillic string to the same dummy string I get from the URL, that would still do the trick, since all I need to do is compare. :) Thanks! Martin
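
    A possible approach (assuming the page and the URL are UTF-8): escape()/unescape() predate UTF-8 and mangle Cyrillic, while decodeURIComponent()/encodeURIComponent() handle %-escaped UTF-8 correctly.

        var query = window.location.search.substring(1);   // e.g. "name=%D0%98%D0%B2%D0%B0%D0%BD"
        var value = query.split("=")[1];
        var decoded = decodeURIComponent(value);           // "Иван"
        // the reverse direction, to compare against the raw GET var:
        var reencoded = encodeURIComponent("Иван");        // "%D0%98%D0%B2%D0%B0%D0%BD"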

  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services across multiple virtual guests. Those guests are supposed to:

    1. communicate with each other (for instance, a virtual web server has to access a virtual database server);
    2. be reachable from the dedicated server (for instance, SSH access); and
    3. access the Internet via the dedicated server (for instance, to download security updates).

    Currently, this is achieved by having host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest. There, virtual adapter eth0 is attached to vboxnet0 (to achieve (1) and (2)), and virtual adapter eth1 is attached to VirtualBox's NAT (to achieve (3)). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (and bound to vboxnet0 there). This allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server).

    This setup with two virtual adapters frequently leads to problems and at least complicates many things. For instance, on the dedicated server there is OpenVPN, which allows access to the virtual machines via the Internet; furthermore, there is Shorewall, which controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE...

    Therefore, I would prefer to have only one single virtual adapter on each guest, used for both incoming and outgoing connections. As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air", thus without a complement on the dedicated server? How would I have to set up the routing on the host server? Please note that the host / dedicated server has only one network adapter (eth0), which is connected to the provider's network. Regards, Martin
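
    One pattern that avoids bridging entirely (a sketch, not a tested recipe for Hetzner's network, VM names illustrative): keep a single host-only adapter per guest and route/NAT it out through eth0 on the host, so the guests need no second NIC.

        # each guest gets exactly one NIC, attached to the host-only network
        VBoxManage modifyvm "webserver" --nic1 hostonly --hostonlyadapter1 vboxnet0
        # on the dedicated server: forward and masquerade the host-only subnet
        # (192.168.56.0/24 is VirtualBox's default for vboxnet0 - adjust to taste)
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o eth0 -j MASQUERADE

    With Shorewall already in place, the masquerading rule would live in its configuration rather than in a raw iptables call.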

  • Mandatory Parameters in Request object (WCF)

    - by Martin Moser
    Hi, I'm currently writing a WCF service. One of its methods gets a request object and returns a response object. In the request there are a couple of value-type members. Is there a way to declare members as mandatory, in a declarative way? I'm at an early stage of development and I don't want to start with versioning now. In addition, I don't want a method signature with 25 parameters; that is why I created the request object. The problem I have is that, due to the value types, I can never be sure whether the consumer of the service intended to send the default value or simply omitted the property out of laziness. On the consumer side you don't easily detect that you probably missed that property. So I would like something that forces the caller of the service to provide a value, ideally with a compile-time error if they don't. Any ideas? TIA, Martin
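
    The closest declarative lever I know of (a sketch, and not quite what is asked for: it fails at deserialization time with a fault, not at compile time) is IsRequired on [DataMember], optionally combined with nullable value types so that "not sent" is distinguishable from "default value":

        using System.Runtime.Serialization;

        [DataContract]
        public class CreateOrderRequest            // illustrative name
        {
            [DataMember(IsRequired = true)]        // missing element => deserialization fault
            public int CustomerId { get; set; }

            [DataMember(IsRequired = true)]
            public decimal? Amount { get; set; }   // nullable: absence no longer masquerades as 0
        }

    A true compile-time guarantee only works if the consumer shares the contract assembly and the request type exposes a constructor that demands every value.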

  • VBScript - copy files modified in last 24 hours

    - by Martin North
    Hi, I'm trying to copy files from a directory where the last modified date is within 24 hours of the current date. I'm using a wildcard in the file path as it changes every day. This is what I'm using:

        option explicit
        dim fileSystem, folder, file
        dim path
        path = "d:\x\logs"
        Set fileSystem = CreateObject("Scripting.FileSystemObject")
        Set folder = fileSystem.GetFolder(path)
        for each file in folder.Files
            If DateDiff("d", file.DateLastModified, Now) < 1 Then
                filesystem.CopyFile "d:\x\logs\apache_access_log-*", "d:\completed logs\"
                WScript.Echo file.Name & " last modified at " & file.DateLastModified
            end if
        next

    Unfortunately this seems to copy all files, not just the recently modified ones. Can anyone point me in the right direction? Many thanks, Martin.
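
    A sketch of the likely fix: the CopyFile call re-copies everything matching the wildcard on every hit, regardless of which file passed the date test. Copying file.Path instead copies only the file being examined (and "h" counts hours, whereas "d" counts midnight crossings):

        Option Explicit
        Dim fileSystem, folder, file
        Set fileSystem = CreateObject("Scripting.FileSystemObject")
        Set folder = fileSystem.GetFolder("d:\x\logs")
        For Each file In folder.Files
            ' within the last 24 hours, measured in hours
            If DateDiff("h", file.DateLastModified, Now) < 24 Then
                fileSystem.CopyFile file.Path, "d:\completed logs\"
                WScript.Echo file.Name & " last modified at " & file.DateLastModified
            End If
        Next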

  • Simplest way to use a DataGridView with LINQ to SQL

    - by Martín Marconcini
    Hi, I have never used data grids and such, but today I came across a simple problem and decided to "databind" stuff to finish faster; however, I've found that it doesn't work as I was expecting. I thought that doing something as simple as:

        var q = from cust in dc.Customers
                where cust.FirstName == someString
                select cust;
        var list = new BindingList<Customer>(q.ToList());
        return list;

    and then using that list as DataGridView1.DataSource was all I needed. However, no matter how much I google, I can't find a decent example of how to populate (for add/edit/modify) the results of a single-table query into a DataGridView. Most samples talk about ASP.NET, which doesn't apply; this is WinForms. Any ideas? I've come across other posts and GetNewBindingList, but that doesn't seem to change much. What am I missing (it must be obvious)? Thanks in advance. Martin.
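
    A minimal sketch of the usual two-way WinForms pattern (DataContext and grid names assumed; note this binds the whole table rather than a filtered query, because a plain query result will not track inserts back to the table):

        var dc = new MyDataContext();                       // hypothetical DataContext
        var bindingSource = new BindingSource();
        bindingSource.DataSource = dc.Customers.GetNewBindingList();
        dataGridView1.DataSource = bindingSource;
        // edits and new rows accumulate in the DataContext; push them back with:
        dc.SubmitChanges();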

  • Which data structure for a list of objects + DataGridView

    - by Martin
    Hi, I have to develop code that will store a list of objects, as in the example below:

        101, value 11, value 12, value 13 ...etc
        102, value 21, value 22, value 23 ...etc
        103, value 31, value 32, value 33 ...etc
        104, value 41, value 42, value 43 ...etc

    Now, the difficulty is that the first column is an identifier, and the whole table should always be sorted by it. Easy access to each element is required. Additionally, the list should be easy to update and to extend by adding elements at the end as well as at the front, while staying sorted by the first column. Finally, I would like to be able to display the values in a DataGridView. Most important is the performance of the implementation, as rows will be updated many times per second, and the DataGridView should display all changes immediately. I was thinking about creating a class for the values and then a Dictionary, but I encountered a problem displaying the values in the grid view. What would be the optimal way of implementing this? Thanks in advance, Martin
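
    One possible shape (a sketch under those requirements; Row and CopyFrom are illustrative, not a known library): a SortedDictionary for keyed, always-sorted storage, mirrored into a BindingList that the grid binds to. If Row raises INotifyPropertyChanged, per-cell updates repaint without rebinding.

        var byId = new SortedDictionary<int, Row>();   // Row: hypothetical INotifyPropertyChanged class
        var view = new BindingList<Row>();
        dataGridView1.DataSource = view;

        void Upsert(Row r)
        {
            Row existing;
            if (byId.TryGetValue(r.Id, out existing))
            {
                existing.CopyFrom(r);                  // in-place update; grid repaints via INPC
                return;
            }
            byId[r.Id] = r;
            int i = 0;                                 // insert keeping the view sorted by Id
            while (i < view.Count && view[i].Id < r.Id) i++;
            view.Insert(i, r);
        }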

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Google searches and days of trying, I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My web servers are two single-NIC lighttpd machines and one dual-NIC lighttpd. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements: All 3 web servers running lighttpd should be balanced. Two options for load balancing: best would be if server1 is busy, server2 takes over; if server2 is busy, server3 takes over, etc. Or round-robin style, evenly distributed load, e.g. server1 takes the first call, server2 the second, etc. All requests should be treated the same way (no URL rewriting or so on). Sent Host headers have to be passed through to every server as the HTTP Host header, meaning "server1", "server1.company.internal" and "10.211.1.1".

    My approach:

        acl all src all
        acl manager proto cache_object
        http_port 80 accel defaultsite=server1.company.internal vhost

        #reverse proxy entries
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
        cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
        cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
        cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2

        #decl of names of squid host
        acl registered_name_hostdomain dstdomain server1.company.internal
        acl registered_name_host dstdomain server1
        #ip of squid host
        acl registered_name_ip dstdomain 10.211.2.1

        # access: redirects the correct squid hostname
        http_access allow registered_name_hostdomain
        http_access allow registered_name_host
        http_access allow registered_name_ip
        http_access deny all

        cache_peer_access server1_nic1 allow registered_name_hostdomain
        cache_peer_access server1_nic1 allow registered_name_host
        cache_peer_access server1_nic1 allow registered_name_ip
        cache_peer_access server2_nic1 allow registered_name_hostdomain
        cache_peer_access server2_nic1 allow registered_name_host
        cache_peer_access server2_nic1 allow registered_name_ip
        cache_peer_access server3_nic1 allow registered_name_hostdomain
        cache_peer_access server3_nic1 allow registered_name_host
        cache_peer_access server3_nic1 allow registered_name_ip
        cache_peer_access server3_nic2 allow registered_name_hostdomain
        cache_peer_access server3_nic2 allow registered_name_host
        cache_peer_access server3_nic2 allow registered_name_ip
        cache_peer_access server1_nic1 deny all
        cache_peer_access server2_nic1 deny all
        cache_peer_access server3_nic1 deny all
        cache_peer_access server3_nic2 deny all
        never_direct allow all

    Problems: The load balancer does not balance to anything other than the first server; only if the first server is killed in some way will the second take over. I have seen the others working at some point, but definitely not as the intended load balancing described above. If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend web server, and this always depends on the defaultsite= parameter, probably because the Host header on the request to Squid is not set and is replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found is the current approach with cache_peer_access.

    Questions: Does cache_peer_access influence the round-robin? Is there a better workaround to pass the Host header to the backend web servers? Which parameters increase the speed of load balancing, or does anyone have a better approach? -Martin
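
    For the Host-header question, one option worth testing (an assumption based on the Squid 2.x cache_peer options, not verified in this setup) is pinning the domain per peer with forceddomain= instead of relying on defaultsite=:

        # sketch: send a fixed Host header to this peer
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1 forceddomain=server1.company.internal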

  • According to MSDN, the ReadFile() Win32 function may incorrectly report read operation completion. When?

    - by Martin Dobšík
    The MSDN states in its description of the ReadFile() function (http://msdn.microsoft.com/en-us/library/aa365467%28VS.85%29.aspx): "If hFile is opened with FILE_FLAG_OVERLAPPED, the lpOverlapped parameter must point to a valid and unique OVERLAPPED structure, otherwise the function can incorrectly report that the read operation is complete."

    I have some applications that violate the above recommendation, and I would like to know the severity of the problem. I mean, the program uses a named pipe that has been created with FILE_FLAG_OVERLAPPED, but it reads from it using the following call:

        ReadFile(handle, &buf, n, &n_read, NULL);

    That means it passes NULL as the lpOverlapped parameter. According to the documentation, that call should not work correctly in some circumstances. I have spent a lot of time trying to reproduce the problem, but I was unable to! I always got all data in the right place at the right time. I was testing only named pipes, though.

    Would anybody know when I can expect ReadFile() to return incorrectly and report successful completion even though the data are not yet in the buffer? What would have to happen in order to reproduce the problem? Does it happen with files, pipes, sockets, consoles, or other devices? Do I have to use a particular version of the OS? Or a particular style of reading (like registering the handle with an I/O completion port)? Or particular synchronization of the reading and writing processes/threads? Or when would that fail? It works for me :/ Please help! With regards, Martin
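
    For reference, a minimal sketch of the pattern the documentation asks for on an overlapped handle (standard Win32 calls, not tied to this particular pipe setup): pass an OVERLAPPED with an event and collect the byte count via GetOverlappedResult().

        #include <windows.h>

        DWORD ReadOverlapped(HANDLE h, void *buf, DWORD n)
        {
            OVERLAPPED ov = {0};
            DWORD got = 0;
            ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset event
            if (!ReadFile(h, buf, n, NULL, &ov)) {
                if (GetLastError() != ERROR_IO_PENDING) {
                    CloseHandle(ov.hEvent);                     // a real error, not a pending read
                    return 0;
                }
            }
            GetOverlappedResult(h, &ov, &got, TRUE);            // TRUE = block until the read completes
            CloseHandle(ov.hEvent);
            return got;
        }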

  • Excel VBA: Passing a collection from a class to a module issue

    - by Martin
    Hello, I have been trying to return a collection from a property within a class to a routine in a normal module. The issue I am experiencing is that the collection is populated correctly within the property in the class (FetchAll), but when I pass the collection back to the module (Test), all the entries are populated with the last item in the list.

    This is the Test sub-routine in the standard module:

        Sub Test()
            Dim QueryType As New QueryType
            Dim Item
            Dim QueryTypes As Collection
            Set QueryTypes = QueryType.FetchAll
            For Each Item In QueryTypes
                Debug.Print Item.QueryTypeID, _
                    Left(Item.Description, 4)
            Next Item
        End Sub

    This is the FetchAll property in the QueryType class:

        Public Property Get FetchAll() As Collection
            Dim RS As Variant
            Dim Row As Long
            Dim QTypeList As Collection
            Set QTypeList = New Collection
            RS = .Run ' populates RS with a record set from a database (as an array),
                      ' some code removed
            ' goes through the array and sets up objects for each entry
            For Row = LBound(RS, 2) To UBound(RS, 2)
                Dim QType As New QueryType
                With QType
                    .QueryTypeID = RS(0, Row)
                    .Description = RS(1, Row)
                    .Priority = RS(2, Row)
                    .QueryGroupID = RS(3, Row)
                    .ActiveIND = RS(4, Row)
                End With
                ' adds new QType to collection
                QTypeList.Add Item:=QType, Key:=CStr(RS(0, Row))
                Debug.Print QTypeList.Item(QTypeList.Count).QueryTypeID, _
                    Left(QTypeList.Item(QTypeList.Count).Description, 4)
            Next Row
            Set FetchAll = QTypeList
        End Property

    This is the output I get from the debug in FetchAll:

        1  Numb
        2  PBM
        3  BPM
        4  Bran
        5  Claw
        6  FA C
        7  HNW
        8  HNW
        9  IFA
        10 Manu
        11 New
        12 Non
        13 Numb
        14 Repo
        15 Sell
        16 Sms
        17 SMS
        18 SWPM

    This is the output I get from the debug in Test (the same line, 18 times):

        18 SWPM

    Anyone got any ideas? I am probably totally overlooking something! Thanks, Martin
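
    A likely explanation, judged from the symptom alone: "Dim QType As New QueryType" declares one auto-instantiated object for the whole procedure, so every loop iteration rewrites the same instance that all 18 collection entries point at. A sketch of the fix:

        Dim QType As QueryType                 ' no "As New" here
        For Row = LBound(RS, 2) To UBound(RS, 2)
            Set QType = New QueryType          ' fresh instance on every pass
            With QType
                .QueryTypeID = RS(0, Row)
                .Description = RS(1, Row)
            End With
            QTypeList.Add Item:=QType, Key:=CStr(RS(0, Row))
        Next Row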

  • Version Control and Code Formatting

    - by Martin Giffy D'Souza
    Hi, I'm currently part of the team implementing a new version control system (Subversion) within my organization. There's been a bit of a debate on how to handle code formatting, and I'd like to get other people's opinions and experiences on this topic.

    We currently have ~10 developers, each using different tools (due to licensing and preference). Some of these tools have automatic code formatters and others don't. If we allow "blind" check-ins, the code will look drastically different after each check-in, which will complicate things such as diffs and merges. I've talked to several people and they've mentioned the following solutions:

    1. Use the same developer program with the same code formatter (not really an option due to licensing).
    2. Have a hook (either client or server side) which will automatically format the code before it goes into the repository (see the sketch below).
    3. Manually format the code.

    Regarding the 3rd point, the concept is to never auto-format the code and to have some standards. Right now that seems to be what we're leaning towards. I'm a bit hesitant about that approach, as it could lead to developers spending a lot of time manually formatting code. If anyone can provide their thoughts and experience on this, that would be great. Thank you, Martin
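
    On option 2, a caveat worth knowing: a server-side Subversion pre-commit hook cannot rewrite the incoming transaction, so "format before it goes in" usually degrades to "reject if badly formatted". A sketch, where check-format is a placeholder for whatever formatter or lint step the team agrees on:

        #!/bin/sh
        # pre-commit: reject commits whose added/updated .cs files fail a format check
        REPOS="$1"
        TXN="$2"
        svnlook changed -t "$TXN" "$REPOS" | awk '/^[AU].*\.cs$/ { print $2 }' |
        while read f; do
            if ! svnlook cat -t "$TXN" "$REPOS" "$f" | check-format --stdin; then
                echo "Formatting check failed for $f" >&2
                exit 1      # non-zero exit rejects the whole commit
            fi
        done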

  • Assigning specific menu administration rights for roles in Drupal

    - by Martin Andersson
    Hello folks. I'm trying to give one of my roles the administrative rights to add/remove content in a specific menu (but not all menus). I think I found a module that should enable something like this: http://drupalmodules.com/module/delegate-menu-administration

    I've followed the instructions: added the role to my user, checked the "administer some menus" value for that role, and checked the "Make admin" field for that role and the specific menu in Menus. I also gave the role permissions to change page and story content. However, it still won't let the user add any new content it creates to any menu, and I get an error message saying "warning: Invalid argument supplied for foreach() in /home/martin/www/drupal/modules/delegate_menu_admin/delegate_menu_admin.module on line 346."

    Line 346 looks like this:

        foreach ($form['menu']['parent']['#options'] as $key => $value) {

    I did a print_r($form); in the file just before it, and there's no such array that I can see:

        [menu] => Array
            (
                [#access] => 1
                [delete] => Array
                    (
                        [#access] =>
                    )
            )

    When I gave the role "administer menu" permissions, nothing extra was printed at all, leading me to the assumption that the delegate_menu_admin.module file is not used at all while both the "administer menu" and the "administer some menus" (from the delegate-menu-administration module) permissions are set! Is this some incompatibility in the module because of a Drupal update? Or am I just too tired and too stupid? :)

  • Loading the target of a link in a <DIV> via jQuery's .live event into the same <DIV>??

    - by Martin Pescador
    Hello everyone. I load a certain div from another page with jQuery into a div on my main page, like this:

        <script type="text/javascript">
            $("#scotland").load("http://www.example.com/scotland .gallery");
        </script>
        <div id="scotland"></div>

    The div I call is a piece of code which is automatically generated by a CMS Made Simple module, by the way. Now to my problem: the .gallery div I call looks, a little simplified, like this:

        <div class="gallery">
            <span><img src="http://www.example.com/scotlandimage1.jpg"></span>
            <span class="imgnavi"><a href="link_to_next_page_with_one_image">Next image</a></span>
        </div>

    I want the "Next image" link to load the next page into the .gallery div (it is always a page with one image on it). But what it does is open the new page http://www.example.com/scotland only. I tried to use jQuery's .live event to load the linked page (that would be "scotlandimage2" and the navigation, as you can see in the upper part - not only the image!), but I must have done something wrong. I tried different ways, but never got it to work. This was my last try:

        $(".imgnavi a").click(function() {
            var myUrl = $(this).attr("href");
            $(".gallery").load(myUrl);
            return false;
        });

    I have to admit that I am very new to jQuery... But does someone know what I did wrong (am I even using the right handlers?)? Thanks very much in advance! Martin
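
    A sketch of what the .live route would look like (a plain .click binding only attaches to links that exist at bind time; content pulled in by .load later never gets the handler, which would explain the default navigation). Assumes the jQuery 1.3-era .live API the question mentions:

        $(".imgnavi a").live("click", function() {
            var myUrl = $(this).attr("href");
            // re-extract just the .gallery fragment from the fetched page,
            // replacing the old gallery inside its container (#scotland)
            $(".gallery").parent().load(myUrl + " .gallery");
            return false;
        });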

  • How to visualize a list of lists of lists of ... in R?

    - by Martin
    Hi there, I have a very deeply nested list of lists in R. Now I want to print this list to the standard output to get a better overview of its elements. It should look like the way the StatET plugin for Eclipse shows a list. Example list:

        l6 = list()
        l6[["h"]] = "one entry"
        l6[["g"]] = "nice"
        l5 = list()
        l5[["e"]] = l6
        l4 = list()
        l4[["f"]] = "test"
        l4[["d"]] = l5
        l3 = list()
        l3[["c"]] = l4
        l2 = list()
        l2[["b"]] = l3
        l1 = list()
        l1[["a"]] = l2

    This should print like:

        List of 1
         $ a:List of 1
          ..$ b:List of 1
          .. ..$ c:List of 2
          .. .. ..$ f: chr "test"
          .. .. ..$ d:List of 1
          .. .. .. ..$ e:List of 2
          .. .. .. .. ..$ h: chr "one entry"
          .. .. .. .. ..$ g: chr "nice"

    I know this is possible with recursion and tracking the depth, but is there a way to do this with the help of rapply or something like that? Thanks in advance, Martin
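
    For what it's worth, the desired output above is exactly the shape that base R's str() produces, so the shortest route would be:

        str(l1)                    # prints the nested "List of ..." view
        str(l1, max.level = 2)     # optionally cap how deep it descends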

  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base in the region of 1,000,000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid-nineties in there!). A dispute has arisen in the team over the syntax that we use for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle syntax for joins, meaning we have code that looks like this...

    Example of inner join:

        select customers.*, orders.date, orders.value
        from customers, orders
        where customers.custid = orders.custid

    Example of outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers, contacts
        where customers.custid = contacts.custid(+)

    As new developers have joined the team, we have noticed that some of them seem to prefer SQL-92 queries, like this:

    Example of inner join:

        select customers.*, orders.date, orders.value
        from customers
        inner join orders on (customers.custid = orders.custid)

    Example of outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers
        left join contacts on (customers.custid = contacts.custid)

    Group A say that everyone should be using the "old" syntax: we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code rewriting database queries now, and it wouldn't pay us if we did. They also point out that "this is the way we've always done it, and we're comfortable with it..."

    Group B, however, agree that we don't have the time to go back and change existing queries, but say we really ought to adopt the "new" syntax in code that we write from here on in. They say that developers only really look at a single query at a time, and that as long as developers know both syntaxes, there is nothing to be gained from rigidly sticking to the old syntax, which might be deprecated at some point in the future.

    Without declaring where my loyalties lie, I am interested in hearing the opinions of impartial observers - so let the games commence! Martin.

    P.S. I've made this a community wiki so as not to be seen as just blatantly chasing question points...

  • Why does ANTLR not parse the entire input?

    - by Martin Wiboe
    Hello, I am quite new to ANTLR, so this is likely a simple question. I have defined a simple grammar which is supposed to include arithmetic expressions with numbers and identifiers (strings that start with a letter and continue with one or more letters or numbers). The grammar looks as follows:

        grammar while;

        @lexer::header {
            package ConFreeG;
        }

        @header {
            package ConFreeG;
            import ConFreeG.IR.*;
        }

        @parser::members { }

        arith:
              term
            | '(' arith ( '-' | '+' | '*' ) arith ')'
            ;

        term returns [AExpr a]:
              NUM { int n = Integer.parseInt($NUM.text); a = new Num(n); }
            | IDENT { a = new Var($IDENT.text); }
            ;

        fragment LOWER : ('a'..'z');
        fragment UPPER : ('A'..'Z');
        fragment NONNULL : ('1'..'9');
        fragment NUMBER : ('0' | NONNULL);

        IDENT : ( LOWER | UPPER ) ( LOWER | UPPER | NUMBER )*;
        NUM : '0' | NONNULL NUMBER*;

        fragment NEWLINE : '\r'? '\n';
        WHITESPACE : ( ' ' | '\t' | NEWLINE )+ { $channel=HIDDEN; };

    I am using ANTLR v3 with the ANTLR IDE Eclipse plugin. When I parse the expression (8 + a45) using the interpreter, only part of the parse tree is generated: http://imgur.com/iBaEC.png

    Why does the second term (a45) not get parsed? The same happens if both terms are numbers. Thank you, Martin Wiboe
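
    One common cause worth checking (an assumption, not verified against this grammar): without an explicit end-of-input anchor, an ANTLR v3 parser can stop as soon as a prefix of the input matches. Adding a hypothetical start rule that demands EOF forces the whole expression to be consumed:

        // entry rule added to the grammar above; parse via this rule
        prog : arith EOF ;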

  • Running an executable on a network share with a CustomAction in WiX?

    - by martin
    Hello, I have created an MSI package which compresses some XML files into a zip file during installation. I have created a CustomAction for this purpose:

        <CustomAction Id="CompressMy" BinaryKey="zipEXE"
                      ExeCommand="a -tzip &quot;[TEMPLATE_DIR]my.zip&quot; &quot;[TempSourceFolder]data.xml&quot;"
                      Return="check" HideTarget="no" Impersonate="no" Execute="deferred" />

    The installation works fine if I install to a local drive, but recently a customer wanted to install [TEMPLATE_DIR] to a network drive on Windows Vista. The CustomAction fails, because the elevated install user hasn't mapped the network drive, even though the user who launched the installer has mapped it. This also happens if I try to install to a UNC path. I use 7-Zip for compressing and have added it to my MSI package. I have tried to set Impersonate="yes", but then the installation fails if my TEMPLATE_DIR is, for example, the ProgramData dir.

    Do you have any idea what I can do? I thought about checking whether TEMPLATE_DIR is a network path, but I don't know how to check this. Or do you have any other ideas how I can support both a local and a network installation while using this custom action? Would be great if there is any advice. Greetings, Martin

  • How to access elements in a complex list?

    - by Martin
    Hi, it's me and the lists again. I have a nice list which looks like this:

        tmp = NULL
        t = NULL
        tmp$resultitem$count = "1057230"
        tmp$resultitem$status = "Ok"
        tmp$resultitem$menu = "PubMed"
        tmp$resultitem$dbname = "pubmed"
        t$resultitem$count = "305215"
        t$resultitem$status = "Ok"
        t$resultitem$menu = "PMC"
        t$resultitem$dbname = "pmc"
        tmp = c(tmp, t)
        t = NULL
        t$resultitem$count = "1"
        t$resultitem$status = "Ok"
        t$resultitem$menu = "Journals"
        t$resultitem$dbname = "journals"
        tmp = c(tmp, t)

    Which produces:

        str(tmp)
        List of 3
         $ resultitem:List of 4
          ..$ count : chr "1057230"
          ..$ status: chr "Ok"
          ..$ menu  : chr "PubMed"
          ..$ dbname: chr "pubmed"
         $ resultitem:List of 4
          ..$ count : chr "305215"
          ..$ status: chr "Ok"
          ..$ menu  : chr "PMC"
          ..$ dbname: chr "pmc"
         $ resultitem:List of 4
          ..$ count : chr "1"
          ..$ status: chr "Ok"
          ..$ menu  : chr "Journals"
          ..$ dbname: chr "journals"

    Now I want to search through the elements of each "resultitem". I want to know the "dbname" for every database that has less than 10 "count" (example). In this case it is very easy, as this list only has 3 elements, but the real list is a little bit longer. This could simply be done with a for loop, but is there a way to do it with some other function of R (like rapply)? My problem with those apply functions is that they only look at one element. If I do a grep to get all "dbname" elements, I cannot get the count of each element:

        rapply(tmp, function(x) paste("Content: ", x))[grep("dbname", names(rapply(tmp, c)))]

    Does someone have a better idea than a for loop? Thanks, Martin
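
    A compact alternative to the loop (a sketch against the structure shown above): pull each field out with sapply, then index one vector by a condition on the other.

        counts  <- as.numeric(sapply(tmp, function(x) x$count))
        dbnames <- sapply(tmp, function(x) x$dbname)
        dbnames[counts < 10]
        # [1] "journals"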

  • start-stop-daemon quoted arguments misinterpreted

    - by Martin Westin
    Hi, I have been trying to make an init script using start-stop-daemon. I am stuck on the arguments to the daemon. I want to keep these in a variable at the top of the script, but I can't get the quoting to filter down correctly. I'll use ls here so we don't have to look at binaries and arguments that most people won't know or care about. The end result I am looking for is for start-stop-daemon to run: ls -la "/folder with space/"

        DAEMON=/usr/bin/ls
        DAEMON_OPTS='-la "/folder with space/"'
        start-stop-daemon --start --make-pidfile --pidfile $PID --exec $DAEMON -- $DAEMON_OPTS

    Double escaping the options and trying innumerable variations of quoting do not help... they always end up mangled by the time they reach the daemon. Enclosing $DAEMON_OPTS in quotes changes things: then they are seen as one single argument... never the right number, though :) Echoing the command line (start-stop-daemon ...) prints exactly the right thing to the screen, but the daemon (the real one, not ls) complains about the wrong number of arguments. How do I specify a variable so that the quotes inside it are carried along to the daemon correctly? Thanks, Martin
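
    Two possible workarounds (sketches). With bash available, an array preserves the embedded spaces without re-splitting:

        DAEMON=/usr/bin/ls
        DAEMON_OPTS=(-la "/folder with space/")
        start-stop-daemon --start --make-pidfile --pidfile "$PID" \
            --exec "$DAEMON" -- "${DAEMON_OPTS[@]}"

    Under plain POSIX sh (no arrays, as in many init scripts), eval re-parses the quoting that plain expansion would leave as literal characters:

        DAEMON_OPTS='-la "/folder with space/"'
        eval start-stop-daemon --start --make-pidfile --pidfile "$PID" \
            --exec "$DAEMON" -- $DAEMON_OPTS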

  • Problem declaring and calling internal methods

    - by Martin
    How do I declare and use small helper functions inside my normal methods? In one of my Objective-C methods I need a function to find an item within a string:

        -(void) Onlookjson:(id) sender {
            NSString * res = getKeyValue(res, @"Birth");
        }

    I came up with a normal C-style declaration for the helper function getKeyValue, like this:

        NSString * getKeyvalue(NSString * s, NSString key) {
            NSString *trm = [[s substringFromIndex:2] substringToIndex:[s length]-3];
            NSArray *list = [trm componentsSeparatedByString:@";"];
            ....
            NSString res;
            res = [list objectAtIndex:1];
            ...
            return res;
        }

    Input data string:

        { Birth = ".."; Death = "..."; ... }

    Anyway, I get an exception "unrecognized selector sent to instance" for either of the first two lines in the helper function. How do I declare helper functions that are just to be used internally, and how do I call them safely? Regards, Martin
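
    A sketch of the helper with the likely problems fixed (several object declarations are missing their '*', the names getKeyValue/getKeyvalue differ, and the call passes its own uninitialized result variable as the source string, any of which can end in "unrecognized selector"):

        // file-scope C helper, assuming the "{ Key = value; ... }" input format above
        static NSString *getKeyValue(NSString *s, NSString *key) {
            NSString *trm = [[s substringFromIndex:2] substringToIndex:[s length] - 3];
            NSArray *list = [trm componentsSeparatedByString:@";"];
            // look for the entry that starts with the key instead of a fixed index
            for (NSString *entry in list) {
                NSArray *pair = [entry componentsSeparatedByString:@"="];
                if ([pair count] == 2 &&
                    [[[pair objectAtIndex:0] stringByTrimmingCharactersInSet:
                        [NSCharacterSet whitespaceCharacterSet]] isEqualToString:key])
                    return [pair objectAtIndex:1];
            }
            return nil;
        }

        -(void) Onlookjson:(id) sender {
            NSString *raw = ...;   // the source string has to come from somewhere
            NSString *res = getKeyValue(raw, @"Birth");
        }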

  • Creating a RESTful API - HELP!

    - by Martin Cox
    Hi chaps, over the last few weeks I've been learning about iOS development, which has naturally led me into the world of APIs. Now, searching around on the Internet, I've come to the conclusion that using the REST architecture is very much recommended, due to its supposed simplicity and ease of implementation.

    However, I'm really struggling with the implementation side of REST. I understand the concept: using HTTP methods as verbs to describe the action of a request, responding with suitable response codes, and so on. It's just that I don't understand how to code it. I don't get how I map a URI to an object. I understand that a GET request for domain.com/api/user/address?user_id=999 would return the address of user 999, but I don't understand where or how that mapping from /user/address to some method that queries a database has taken place. Is this all coded in one PHP script? Would I just have a method that grabs the URI, like so:

        $array = explode("/", ltrim(rtrim($_SERVER['REQUEST_URI'], "/"), "/"))

    and then cycle through that array? First I would have a request for "user", so the PHP script would direct my request to the user object and then invoke the address method. Is that what actually happens? I've probably not explained my thought process very well there. The main thing I'm not getting is how that URI /user/address?id=999 is somehow broken down and executed - does it actually resolve to code?

        class user(id) {
            address() {
                //get user address
            }
        }

    I doubt I'm making sense now, so I'll call it a day trying to explain further. I hope someone out there can understand what I'm trying to say! Thanks chaps, look forward to your responses. Martin

    p.s - I'm not a developer yet, I'm learning :)
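
    Broadly yes, that is what a "front controller" does: the web server rewrites every /api/... request to one script, which splits the path and dispatches to a class and method. A bare-bones sketch (all names illustrative, no framework):

        <?php
        // e.g. GET /api/user/address?id=999, with all /api/* rewritten to this script
        $path  = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
        $parts = explode('/', trim($path, '/'));    // ["api", "user", "address"]
        array_shift($parts);                        // drop "api"
        $resource = array_shift($parts);            // "user"
        $action   = count($parts) ? array_shift($parts) : 'index';   // "address"

        $class = ucfirst($resource);                // maps onto: class User { ... }
        if (class_exists($class) && method_exists($class, $action)) {
            $id  = isset($_GET['id']) ? $_GET['id'] : null;
            $obj = new $class($id);                 // e.g. User::address() queries the DB
            header('Content-Type: application/json');
            echo json_encode($obj->$action());
        } else {
            header('HTTP/1.0 404 Not Found');
        }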

  • MVVM for Dummies

    - by Martin Hinshelwood
    I think that I have found one of the best articles on MVVM that I have ever read: http://jmorrill.hjtcentral.com/Home/tabid/428/EntryId/432/MVVM-for-Tarded-Folks-Like-Me-or-MVVM-and-What-it-Means-to-Me.aspx

    This article sums up what is in MVVM and what is outside of MVVM. Note: when I and most other people say MVVM, we really mean MVVM, Commanding, Dependency Injection + any other patterns you need to create your application. In WPF a lot of use is made of the Decorator and Behaviour patterns as well. The goal of all of this is to have pure separation of concerns. This is what the code-behind file of every Control / Window / Page should look like if you are engineering your WPF and Silverlight correctly:

    C# – Ideal:

        public partial class IdealView : UserControl
        {
            public IdealView()
            {
                InitializeComponent();
            }
        }

    Figure: This is the ideal code-behind for a Control / Window / Page when using MVVM.

    C# – Compromise, but works:

        public partial class IdealView : UserControl
        {
            public IdealView()
            {
                InitializeComponent();
                this.DataContext = new IdealViewModel();
            }
        }

    Figure: This is a compromise, but the best you can do without Dependency Injection.

    VB.NET – Ideal:

        Partial Public Class ServerExplorerConnectView

        End Class

    Figure: This is the ideal code-behind for a Control / Window / Page when using MVVM.

    VB.NET – Compromise, but works:

        Partial Public Class ServerExplorerConnectView
            Private Sub ServerExplorerConnectView_Loaded(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs) Handles Me.Loaded
                Me.DataContext = New ServerExplorerConnectViewModel
            End Sub
        End Class

    Figure: This is a compromise, but the best you can do without Dependency Injection.

    Technorati Tags: MVVM, .NET, WPF, Silverlight
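
    For contrast, a sketch of what the "ideal" case might look like once a container is involved (names illustrative): the view takes its ViewModel through the constructor, and the composition root, not the view, decides which one.

        public partial class IdealView : UserControl
        {
            // resolved by the DI container; the view never news up its ViewModel
            public IdealView(IdealViewModel viewModel)
            {
                InitializeComponent();
                DataContext = viewModel;
            }
        }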

  • Integrate SharePoint 2010 with Team Foundation Server 2010

    - by Martin Hinshelwood
    Our client is using a brand new shiny installation of SharePoint 2010, so we need to integrate our upgraded Team Foundation Server 2010 instance into it. In order to do that, you need to run the Team Foundation Server 2010 install on the SharePoint 2010 server and choose to install only the "Extensions for SharePoint Products and Technologies". We want our upgraded Team Project Collection to create any new portal in this SharePoint 2010 server farm.

    There are a number of goodies above and beyond a solution file that require the install, with the main one being the TFS 2010 client API. These goodies allow proper integration with the creation and viewing of Work Items from SharePoint, a new feature with TFS 2010. This works in both SharePoint 2007 and SharePoint 2010, with the level of integration dependent on the version of SharePoint that you are running. There are three levels of integration, with "SharePoint Services 3.0" or "SharePoint Foundation 2010" being the lowest. This level only offers Reporting Services framed integration for reporting, along with Work Item integration and document management. The highest is Microsoft Office SharePoint Services (MOSS) Enterprise, with Excel Services integration providing some lovely dashboards.

    Figure: Dashboards take the guessing out of project planning and estimation. Plus, writing these reports would be boring!

    The Extensions that you need are on the same installation media as the main TFS install, and the only difference is the options you pick during the install.

    Figure: Installing the TFS 2010 Extensions for SharePoint Products and Technologies onto SharePoint 2010

    Annoyingly, you may need to reboot a couple of times, but on this server the process was MUCH smoother than on our internal server. I think this was mostly to do with this being a clean install. Once it is installed you need to run the configuration. This will add all of the Solutions and Templates that are needed for SharePoint to work properly with TFS.

    Figure: This is where all the TFS 2010 goodies are added to your SharePoint 2010 server and the TFS 2010 object model is installed.

    Figure: All done. You have everything installed, but you still need to configure it.

    Now that we have the TFS 2010 SharePoint Extensions installed on our SharePoint 2010 server, we need to configure them both so that they will talk happily to each other.

    Configuring the SharePoint 2010 managed path for Team Foundation Server 2010

    In order for TFS to automatically create your project portals you need a wildcard managed path set up. This is where TFS will create the portal during the creation of a new Team Project. To find the managed paths page for any application, you first need to select the "Manage web applications" link from the SharePoint 2010 Central Administration screen.

    Figure: Find the "Manage web applications" link under the "Application Management" section.

    Once you are there, you will see that the "Managed Paths" button is greyed out; selecting one of the applications will enable it to be clicked.

    Figure: You need to select an application for the SharePoint 2010 ribbon to activate.

    Figure: You need to select an application before you can get to the Managed Paths for that application.

    Now we need to add a managed path for TFS 2010 to create its portals under. I have gone for the obvious option of just calling the managed path "TFS02", as the TFS 2010 server is the second TFS server that the client has installed, TFS 2008 being the first.
    This links the location to the server name, and as you can't have two projects of the same name in two separate project collections, there are unlikely to be any conflicts.

    Figure: Add a "tfs02" wildcard inclusion path to your SharePoint site.

    Configure the Team Foundation Server 2010 connection to SharePoint 2010

    In order to have your new TFS 2010 server talk to and create sites in SharePoint 2010, you need to tell the TFS server where to put them. As this TFS 2010 server was installed in out-of-the-box mode, it has a SharePoint Services 3.0 (the free one) server running on the same box. But we want to change that so we can use the external SharePoint 2010 instance. Just open the "Team Foundation Server Administration Console" and navigate to the "SharePoint Web Applications" section. Here you click "Add" and enter the details for the managed path we just created.

    Figure: If you have special permissions on your SharePoint you may need to add accounts to the "Service Accounts" section.

    Before we can set this new SharePoint 2010 instance to be the default for our upgraded Team Project Collection, we need to configure SharePoint to take instructions from our TFS server.

    Configure SharePoint 2010 to connect to Team Foundation Server 2010

    On your SharePoint 2010 server, open the Team Foundation Server Administration Console and select the "Extensions for SharePoint Products and Technologies" node. Here we need to "grant access" for our TFS 2010 server to create sites. Click the "Grant access" link and fill out the full URL to the TFS server, for example http://servername.domain.com:8080/tfs, and if need be restrict the path that TFS sites can be created on. Remember that when users create a new Team Project they can change the default and point it anywhere they like, as long as it is an authorised SharePoint location.

    Figure: Grant access for your TFS 2010 server to create sites in SharePoint 2010

    Now that we have an authorised location for our Team Project portals to be created, we need to tell our Team Project Collection that this is where it should put sites by default for any new Team Projects created.

    Configure the Team Foundation Server 2010 Team Project Collection to create new sites in SharePoint 2010

    Back on our TFS 2010 server, we need to set the defaults for our upgraded Team Project Collection to the new SharePoint 2010 integration we have just set up. On the TFS 2010 server, open up the "Team Foundation Server Administration Console" again and navigate to the "Team Project Collections" node. Once you are there, you will see a list of all of your TPCs; in our case we have a DefaultCollection as well as our named, upgraded collection for TFS 2008. If you select the "SharePoint Site" tab, you can see that it is not currently configured.

    Figure: Our newly upgraded TFS 2008 Team Project Collection does not have SharePoint configured.

    Select "Edit Default Site Location" and select the new integration point that we just set up for SharePoint 2010. Once you have selected the "SharePoint Web Application" (the thing we just configured), it will give you an example based on that configuration point and the name of the Team Project Collection that we are configuring.

    Figure: Set the default location for new Team Project portals to be created for this Team Project Collection.

    This is where the reason for configuring the Extensions on the SharePoint 2010 server before doing this last bit becomes apparent.
    TFS 2010 is going to create a site at our http://sharepointserver/tfs02/ location called http://sharepointserver/tfs02/[TeamProjectCollection], or whatever we had specified, and it would have had difficulty doing this if we had not given it permission first.

    Figure: If there is no Team Project Collection site at this location, the TFS 2010 server is going to create one.

    This will create a nice Team Project Collection parent site to contain the portals for any new Team Projects that are created. It is worth noting that it will not create portals for existing Team Projects, as this process is run during the Team Project creation wizard.

    Figure: Just a basic parent site to host all of your new Team Project portals as sub-sites.

    You will need to add all of the users that will be creating Team Projects as Administrators of this site so that they will not get an error during the Project Creation Wizard. You may also want to customise this as a proper portal to your projects if you are going to have lots of them, but it is really just a default placeholder so you have a top-level site that you can back up and point at. You have now integrated SharePoint 2010 and Team Foundation Server 2010! You can now go forth and multiply your Team Projects for this Team Project Collection, or you can continue to add portals to your other collections.

    Technorati Tags: TFS 2010, SharePoint 2010, VS ALM

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day. With the launch of Visual Studio 2010 already done in Asia and rolling around the world towards us, we are getting ready for the RTM (Release). We have had TFS 2010 in production for nearly 6 months and have had only minimal problems.

    Update 12th April 2010 – Added Scott Hanselman's tweet about the MSDN download release time.

    SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition.

        Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*!
        @shanselman - Scott Hanselman

    Same as before, we need to uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS 2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes. We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup.

    Done - Snapshot the Hyper-V server: Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server.
    Done - Uninstall Visual Studio Team Explorer 2010 RC: You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server.
    Done - Uninstall TFS 2010 RC
    Done - Install TFS 2010 RTM
    Done - Configure TFS 2010 RTM: Pick the Upgrade option and point it at your existing "tfs_Configuration" database to load all of the existing settings.
    Done - Upgrade the SharePoint Extensions
    Upgrade Build Servers (Pending)
    Test the server

    The back-out plan, and you should always have one, is to restore the snapshot.

    Upgrading to Team Foundation Server 2010 – Done

    The first thing you need to do is turn off the TFS server, then log into the Hyper-V server and create a snapshot.

    Figure: Make sure you turn the server off and delete all old snapshots before you take a new one.

    I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server. Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM'ness that is coming next. The upgrade procedure for TFS 2010 is to uninstall the old version and install the new one.

    Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.

    Figure: Most of the heavy lifting is done by the uninstaller, but make sure you have removed any of the client bits first, specifically Visual Studio 2010 or Team Explorer 2010.

    Once the uninstall is complete (it took around 5 minutes for me), you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations.

    Figure: It is always recommended to install the 64 bit version of a server application where possible.
    With SharePoint 2010, Exchange 2010 and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32 bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on our TFS server, along with Analysis Services and Reporting Services, but our SharePoint server lives elsewhere.

    Figure: This always confuses people, but in reality it makes sense. Don't install what you do not need. Every extra you install has an impact on performance.

    If you are integrating with SharePoint you will need to run this install on every front-end server in your farm, and don't forget to upgrade your build servers and proxy servers later.

    Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)

    It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server between your TFS server and your build servers.

    Figure: Installing Microsoft .NET Framework 4 takes the most time.

    Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, Add-ons, Check-in Policies and other bits as well.

    Configure Team Foundation Server 2010 – Done

    Now you can configure the server. If you have no key you will need to pick "Install a Trial Licence", but it is only £500, or free with an MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key.

    Figure: You can pick Trial and add your key later using the TFS Server Admin.

    Here is where the real choices happen. We are doing an upgrade from a previous version, so I will pick Upgrade, the same as all you folks that are using the RC or TFS 2008.

    Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgrades them to the release.

    Once you have entered your database server name you can click "List available databases" and it will show what it can upgrade.

    Figure: Select your database from the list and, at this point, make sure you have a valid backup.

    At this point you have not made ANY changes to the databases. The configuration wizard will now load the configuration from your existing database if you have one. If you are upgrading TFS 2008, refer to Rules To Better TFS 2010 Migration. Mostly the default values will suffice during the wizard, but depending on the configuration you want, you can pick different options.

    Figure: Set the application tier account and authentication method to use. We use NTLM to keep things simple, as we host our TFS server externally for our remote developers.

    Figure: Setting your TFS server URLs to be the remote URLs allows the reports to be accessed without using VPN. Very handy for those remote developers.

    Figure: Detected the existing warehouse, no problem.

    Figure: Again, we love green ticks. It gives us a warm fuzzy feeling.

    Figure: The username for connecting to Reporting Services should be a domain account (if you are on a domain, that is).

    Figure: Set up the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.

    You then need to run all of your readiness checks. These checks can save your life!
    They will check all of the settings that you have entered, as well as checking that all the external services are configured and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous versions were not. First, Microsoft changed the install into two steps: install and configuration. Second, they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install, it will complete. If you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.

    Figure: Now we have everything set up, the configuration wizard can do its work.

    Figure: It took a while on the "Web site" stage at one point, but zipped through after that.

    Figure: The last wee bit. TFS needs to do a little tinkering with the data to complete the upgrade.

    Figure: All upgraded. I am not worried about the yellow triangle, as SharePoint was being a little silly.

        Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException)
        Exception Stack Trace:
           at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)
           at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
        [Info   @16:10:16.307] Benign exception caught as part of verify:
        Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException)
        Exception Stack Trace:
           at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)
           at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify()
        Inner Exception Details:
        Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException)
        Exception Data Dictionary: ResponseStatusCode = InternalServerError

    I'll look at SharePoint after; probably the SharePoint box just needs a restart or a kick. If there is a problem with SharePoint it will come out in testing, but I will definitely be passing this on to Microsoft.

    Upgrading the SharePoint connector to TFS 2010

    You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front-end servers. To do this, uninstall the TFS 2010 RC from them in the same way as the server, and then install just the RTM Extensions.
    Figure: Only install the SharePoint Extensions on your SharePoint front-end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.

    Figure: When you configure SharePoint it uploads all of the solutions and templates.

    Figure: Everything is uploaded successfully.

    Figure: TFS even remembered the settings from the previous installation, fantastic.

    Upgrading the Team Foundation Build Servers to TFS 2010

    Just like on the SharePoint servers, you will need to upgrade the build server to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server, you will probably have some version of Visual Studio installed; you will need to remove this as well. (Coming Soon)

    Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS 2010

    If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack:

    Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
    Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that they have the latest versions of their applications with SSW Diagnostic, which will check for Service Packs and hotfixes to Visual Studio as well.

    Technorati Tags: TFS, TFS2010, TFS 2010, Upgrade
