Search Results

Search found 29913 results on 1197 pages for 'content manager assistant'.

Page 43/1197 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • Pagination and duplicate content

    - by jazz090
    I have an archive page that lists the published articles. Because there were so many, I ran a pagination script, producing URLs like 127.0.0.1/archive/2/?p=x&pp=y, where p is the page number and pp is the number of articles to display per page. The pagination looks like this: Prev 1 2 3 4 ... 12 NEXT, with each item linking to a page like <a href="?p=x">x</a>. I also have an items-per-page setter: 25 | 50 | 100 (<a href="?pp=y">y</a>). Now I have a PHP script that stores pp in a session variable. But I am worried about duplicate content (since a page with a higher pp value includes the content of pages with lower pp values) and also about content not getting indexed because it's not reachable from the pagination links; in the example above, pages 5-11 will not be indexed. Any ideas on how to fix this?
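
    A minimal sketch of one common fix (the variable names and page total below are assumptions, not from the question): emit a canonical link that drops the pp parameter, so ?p=2&pp=25 and ?p=2&pp=100 collapse into a single indexable URL, plus rel="prev"/rel="next" hints so crawlers can walk the whole series even though the visible pager skips pages 5-11:

        <?php
        // Hypothetical values; in the real page these would come from the
        // request and from a count of the articles in the archive.
        $p          = isset($_GET['p']) ? max(1, (int) $_GET['p']) : 1;
        $totalPages = 12;

        // One canonical URL per page, independent of the pp page-size setting.
        echo '<link rel="canonical" href="/archive/2/?p=' . $p . '">' . "\n";

        // prev/next hints let crawlers discover every page in the series.
        if ($p > 1) {
            echo '<link rel="prev" href="/archive/2/?p=' . ($p - 1) . '">' . "\n";
        }
        if ($p < $totalPages) {
            echo '<link rel="next" href="/archive/2/?p=' . ($p + 1) . '">' . "\n";
        }
        ?>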

    Read the article

  • Why is the update manager freezing when upgrading to 13.10?

    - by Nil
    Earlier today I started to upgrade to 13.10, only to return much, much later and notice that the update manager was still running. It seems to be frozen and I am reluctant to hit Ctrl+C. I can't launch Nautilus using the icon on the launcher. When I try to run it via the terminal, this is what happens:

        $ nautilus
        Could not register the application: GDBus.Error:org.freedesktop.DBus.Error.UnknownMethod: No such interface `org.gtk.Actions' on object at path /org/gnome/Nautilus

    My printers aren't showing up when I attempt to print. I don't know whether these are symptoms of the same problem. Should I let the update manager continue to run, or should I shut it down? Here are the processes running, if it is any help (the full ps aux listing is truncated here to the lines most relevant to the upgrade):

        $ ps aux
        USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
        root 1 0.0 0.2 5920 4072 ? Ss Oct18 0:02 /sbin/init
        ...
        root 4919 1.8 2.1 201196 38760 ? SNl 13:29 11:25 /usr/bin/python
        root 5128 0.0 0.0 2268 416 ? SN 13:50 0:00 /bin/sh -c whil
        root 5145 0.0 1.7 245280 30872 pts/1 SNs+ 13:50 0:05 /usr/bin/python
        root 20845 0.0 0.3 7980 6048 pts/2 SNs+ 14:29 0:04 /usr/bin/dpkg -
        root 28333 0.0 0.0 2272 552 pts/2 SN+ 14:39 0:00 /bin/sh /var/li
        root 29699 0.0 0.2 8384 4608 pts/2 SN+ 14:40 0:00 modprobe wl
        ...
        1000 27113 0.0 0.0 5240 1144 pts/3 R+ 23:41 0:00 ps aux

    Read the article

  • Content, MetaData and Taxonomy 2: Overview of the Data Layer

    This article is cross-posted from my personal blog. In DotNetNuke version 5.3, we introduced the concept of a centralized Content store, together with the ability to apply Taxonomies (categories) to the content. We have extended this in DNN 5.4 by completing the MetaData API as well as adding Folksonomy (user tags). In this series of blogs I will explain how developers can take advantage of these new features in their own extensions. In the first blog in this series I covered the Taxonomy Manager...

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is by saying that if you would put each instance of a kind of data on its own web page, and if it would make sense to add comments to it, or tags, or ratings, then it is content and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't, and you can store it using the somewhat simpler means that I will now describe. In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties:

        public class SettingsRecord {
            public virtual int Id { get; set; }
            public virtual string RuleType { get; set; }
            public virtual string Name { get; set; }
            public virtual string Criterion { get; set; }
            public virtual string Theme { get; set; }
            public virtual int Priority { get; set; }
            public virtual string Zone { get; set; }
            public virtual string Position { get; set; }
        }

    Each property has to be virtual for NHibernate to handle it (it creates derived classes that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described in a migration:

        public int Create() {
            SchemaBuilder.CreateTable("SettingsRecord", table => table
                .Column<int>("Id", column => column.PrimaryKey().Identity())
                .Column<string>("RuleType", column => column.NotNull().WithDefault(""))
                .Column<string>("Name", column => column.NotNull().WithDefault(""))
                .Column<string>("Criterion", column => column.NotNull().WithDefault(""))
                .Column<string>("Theme", column => column.NotNull().WithDefault(""))
                .Column<int>("Priority", column => column.NotNull().WithDefault(10))
                .Column<string>("Zone", column => column.NotNull().WithDefault(""))
                .Column<string>("Position", column => column.NotNull().WithDefault(""))
            );
            return 1;
        }

    When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class:

        private readonly IRepository<SettingsRecord> _repository;
        private readonly ISignals _signals;
        private readonly ICacheManager _cacheManager;

        public SettingsService(
            IRepository<SettingsRecord> repository,
            ISignals signals,
            ICacheManager cacheManager) {
            _repository = repository;
            _signals = signals;
            _cacheManager = cacheManager;
        }

    The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kinds of LINQ queries) as well as methods such as Delete and Create. Here's, for example, how I'm getting all the records in the table:

        _repository.Table.ToList()

    And here's how I'm deleting a record:

        _repository.Delete(_repository.Get(r => r.Id == id));

    And here's how I'm creating one:

        _repository.Create(new SettingsRecord {
            Name = name,
            RuleType = ruleType,
            Criterion = criterion,
            Theme = theme,
            Priority = priority,
            Zone = zone,
            Position = position
        });

    In summary, you create a record class and a migration, and you're in business: you can just manipulate the data through the repository that the framework is exposing. You even get ambient transactions from the work context.

    Read the article

  • Recommendations for a network of student-related content

    - by Javier Marín
    I am running a network of websites with notes, homework, essays, etc., where users share their own content. I'm having real trouble with the latest Google updates (Penguin, Panda, etc.) because the content is mainly poor quality and covers the same topics. For that reason, I want to create more websites to have a better chance of appearing in the SERPs. My question is: does Google analyze related websites in order to exclude them from the results? I've thought about distributing the websites around the world, on different hosts, but I'm afraid that Google would link them through their Analytics, Webmaster Tools, or AdSense accounts. Is that possible? What other recommendations do you have?

    Read the article

  • How to stop gold-plating and just be content to release working developments

    - by Andy Bowskill
    The development team that I'm a member of has recently adopted Agile practices. This has highlighted the fact that I can't stop myself gold-plating code (and documentation), and I consequently exceed original estimates when I could have delivered solutions that meet the requirements much earlier. I think my work ethic borders on the obsessive, in that I become too attached to my code and am rarely content to release before I've refactored and perfected it to the nth degree. I am happy that I have realised this, but how can I change my attitude/mentality to be content with my progress and release on time instead?

    Read the article

  • Why is my content database so large?

    - by PeterBrunone
    If your SharePoint site collection hasn't grown, but your content database has, the most likely culprit is versioning.  If a list -- or worse, a library -- has versioning enabled, the default is to keep every single version.  That means that every time someone edits and checks in a document, its storage footprint increases by the size of the document (and probably a little more). The solution?  It could be a bit painful, but you'll need to go back into each library and restrict the number of versions to keep (three is sufficient for most uses, but your needs may vary).  I suggest keeping only major versions as well, since minor versions are really just stopping points on the way to a published document. Of course, if you have a real business need to keep all those versions around, then you'll want to look into an archiving solution that will take the old versions out of the content database but still make them available if necessary.

    Read the article

  • Will duplicate international (i18n) content hinder SEO rankings?

    - by Rhys
    Google clearly states that duplicate content, within a single domain or across multiple domains, is not advised. This is understood, but I am not sure whether there are exceptions for sites with region-specific content that is often replicated across locales. For example, a site's /en-us/about page could be identical to /en-uk/about, whereas /en-ja/about is most likely unique. Are Google, Yahoo and Microsoft (GYM) smart enough to understand that the first URL segment is a locale specifier? Is there any robots.txt, header, or similar trickery that I should include to outline the site's international structure?
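
    One widely used signal for exactly this situation is the hreflang alternate annotation, which tells search engines that the locale-prefixed copies of a page are regional variants rather than duplicates. A minimal PHP sketch follows (the domain, locale list, and path are assumptions; note that hreflang expects ISO codes such as en-GB, not en-UK):

        <?php
        // Hypothetical locales and page path, for illustration only.
        $locales = array('en-us', 'en-gb', 'ja');
        $path    = '/about';

        foreach ($locales as $locale) {
            // Each locale's copy declares every variant, including itself.
            echo '<link rel="alternate" hreflang="' . $locale . '"'
               . ' href="http://www.example.com/' . $locale . $path . '">' . "\n";
        }
        ?>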

    Read the article

  • Rehosting content from another server

    - by Lana_M
    We have a set of static pages that will augment a customer's existing site. The pages will not reside on the customer's servers, for logistical reasons and because we need to maintain control of the content. The plan is for the customer to set up a mod_rewrite rule that will funnel certain types of URLs to a single server-side handler script that will grab the appropriate file from a CDN and just output its content. This illustrates the approach:

        <?php
        echo(file_get_contents(str_replace($customer_host, $cdn_host, $_SERVER['REQUEST_URI'])));
        ?>

    Can anyone think of pitfalls, or offer up a different approach? Is there some way to circumvent a script altogether?
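
    For what it's worth, a slightly hardened sketch of that handler (the CDN host name and the URL whitelist pattern are assumptions) would validate the requested path, fail gracefully when the CDN fetch fails, and send an explicit content type rather than echoing whatever comes back:

        <?php
        $cdn_host = 'cdn.example.com'; // hypothetical CDN host
        $path     = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

        // Only proxy URLs that match the expected static-page pattern.
        if (!preg_match('#^/static/[A-Za-z0-9/_-]+\.html$#', $path)) {
            header('HTTP/1.1 404 Not Found');
            exit;
        }

        // Fetch from the CDN; @ suppresses the warning so we can handle failure.
        $body = @file_get_contents('http://' . $cdn_host . $path);
        if ($body === false) {
            header('HTTP/1.1 502 Bad Gateway');
            exit;
        }

        header('Content-Type: text/html; charset=utf-8');
        echo $body;
        ?>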

    Read the article

  • Google Analytics Content Experiments for non-simultaneous tests

    - by mnort9
    I really like how Google Analytics displays the results of content experiments. However, it seems the tool only works for simultaneous tests. I'd like to use the tool without implementing the page-variation code on my site. For example, I want to test copy on an ecommerce category page. The original page variation would be the current page for the past 2500 visits. After making the copy changes, the new variation would be for the next 2500 visits. I realize I can simply record the metrics before and after each variation, but I'd like to take advantage of Google's presentation of the experiment. Is it possible to use Content Experiments in this way?

    Read the article

  • XNA Content.Load Dependency

    - by Richard
    Quick question: the project I'm building for test purposes is working fine, but I have dependencies flying around everywhere due to the XNA framework. In Update I have GameTime passed everywhere... this is okay. In Draw I have GameTime & SpriteBatch passed everywhere... this is okay. My issue is with the Content.Load of textures/sounds/fonts. I have them as public variables, i.e. Texture1 = Content.Load(Of Texture2D)("Texture1"). I'm passing a 'Game1' pointer into the constructor of every new class being instantiated, to gain access to these variables. Am I missing an OOP trick to prevent me having to pass a pointer to Game1 to every new class?

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g.  This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood.  You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information.  From here, you can select the tracing sections to include.  Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer.  Usually I'm trying to find out some information regarding a search, database query, or user information.  Besides debugging, it's also very helpful for performance tuning.

    One of the nice tricks with the tracing is that it honors the wildcard (*) character.  So you can put in 'schema*' and gather all of the schema-related tracing.  And you'll notice that if you select 'all' and update, it changes to just a *.

    To view the tracing in real time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom.  This works well if you're looking at something pretty discrete and the system isn't getting much activity.  But if you've got a lot of tracing going on, it would be better to go after the trace log file itself.  By default, the log files can be found in the <content server instance directory>/data/trace directory.  You'll see it named 'idccs_<managed server name>_current.log'.  You may also find previous trace logs that have rolled over.  In this case they will be identified by a date/time stamp in the name.  By default, the server will rotate the logs after they reach 1MB in size.  And it will keep the most recent 10 logs before they roll off and get deleted.  If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings.

    If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs:

        # Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables section.  Restart the server and it should take on the new logging settings.

    Read the article

  • JavaOne 2012 Content Catalog is Available

    - by arungupta
    JavaOne 2012 Content Catalog is now available! The complete list of technical sessions, birds-of-a-feather sessions, hands-on labs, tutorials and other details is available. We are still working on the overall schedule and it will be shared in the coming days. The conference will be held in San Francisco from September 30th to October 4th, 2012. You can also connect using the usual social media channels: facebook, twitter, blogs, linkedin, and mix. Oracle Open World, running in parallel to JavaOne, also has its content catalog available.

    Read the article

  • Microformats, Reviews and Duplicate Content

    - by Nicholas
    Let's say I have a site that sells widgets, and the URL structure is like so: /[type-of-widget]/[sub-type]/[widget-name]/ So, a URL for a widget might be: /screwdrivers/philips-screwdrivers/acme-big-screwdriver/ We show reviews on the widget page, and use the appropriate microformat data so Google knows it's a review, etc. Now, what if I want to show random reviews on the "sub-type" and "type-of-widget" landing pages? Will Google ding me for duplicate content, or is it smart enough to know (based on microformat data, etc.) that this is not duplicate content?

    Read the article

  • Dynamically change page content based on URL parameter?

    - by volume one
    The title of my question seems simple, but here is an example of what I want to do: http://www.mayoclinic.com/health/infant-jaundice/DS00107 What happens on that page is that whenever you click on a link to go to a section (e.g. "Symptoms") in the article on "Infant Jaundice", it adds a URL parameter like this: http://www.mayoclinic.com/health/infant-jaundice/DS00107/DSECTION=symptoms As the DSECTION parameter changes, you get different content on the same page DS00107. The content changes, as well as the <meta> keywords. Can someone please tell me how this is achieved? I was thinking it was an if/else situation programmed into the page itself to display different properties depending on the URL parameter. Any help or suggestions are very much appreciated, and my thanks to you for reading my question.
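
    That if/else guess is essentially right. A minimal PHP sketch of the idea (the section names, keywords, and copy below are made up for illustration; on the Mayo Clinic site the DSECTION parameter sits in the URL path, presumably mapped to a query parameter by a rewrite rule, so a plain query string is used here for brevity): one page looks up both its body and its <meta> keywords from a section parameter:

        <?php
        // Hypothetical section table for a single article page.
        $sections = array(
            'symptoms'  => array('keywords' => 'infant jaundice, symptoms',
                                 'body'     => 'Symptoms section content...'),
            'treatment' => array('keywords' => 'infant jaundice, treatment',
                                 'body'     => 'Treatment section content...'),
        );

        // Fall back to a default section for unknown or missing parameters.
        $key     = isset($_GET['DSECTION']) ? strtolower($_GET['DSECTION']) : 'symptoms';
        $section = isset($sections[$key]) ? $sections[$key] : $sections['symptoms'];

        echo '<meta name="keywords" content="'
           . htmlspecialchars($section['keywords']) . '">' . "\n";
        echo '<div id="content">' . $section['body'] . '</div>' . "\n";
        ?>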

    Read the article

  • Thank You MySQL Connect Content Committee Members

    - by Bertrand Matthelié
    Yesterday we announced the publication of the MySQL Connect Content Catalog. Today we would like to thank the MySQL Connect Content Committee members, and especially our external members, for their efforts helping us build the best possible MySQL Connect program. The Call for Papers generated a large number of great submissions (thank you all for that!) and it was indeed a tough job to select sessions among those. So thank you very much, Sheeri, Erin, Giuseppe, Calvin and Yoshinori! Your input has been invaluable. Learn more about MySQL Connect (San Francisco, Sept 21-23). Register Now and Save US$500 with the Early Bird Discount.

    Read the article

  • Content Weighting and Sociology

    - by Chris Massey
    I’ve had loads of fantastic feedback on the concept and early curation wireframes I posted on the labs, and it’s led to some further thoughts on the topic of voting. More specifically, thoughts about the kinds of behaviour and values a platform encourages in its users via the set of available actions. StackOverflow is a very good example of this kind of sociology in action, not only via the set of available actions, but through the reputation system it uses to both reward and control its users. In our case (specifically, in the case of the curation model I’ve been talking about thus far), the main considerations are how the quality of content is judged, and how to make sure each piece of curated content gets a fair hearing. Based on the feedback and conversations I’ve had with many of you over the last few days, a few considerations came to light about how we might need to weight and display our curations, and I’ve written about that more extensively over on the labs themselves – have a read and let me know what you think.

    Read the article

  • Tracking scroll depth to reveal content engagement in Google Analytics

    This article investigates a way to track content engagement on your site. By monitoring how far down the page a visitor to your site travels, and then recording the data in Google Analytics, you can discover how many of your visitors are reading your content all the way to the end. Introduction When browsing through your Google Analytics reports you will find a massive selection of data at your fingertips. You can find how many people visited a page, how long they were there, what country they were...

    Read the article

  • Duplicate page content and the Google index

    - by Kit Sunde
    I have static pages with dynamically expanding content that Google is indexing. I also have deep links into virtually duplicate pages, which pre-expand the relevant section of content. It seems like Google is ignoring all my specialized pages and not putting them in the index, even after going through Webmaster Tools, crawling, and submitting them to the index manually. I also use the Google API for integrating search on the site, and the deep-linked pages won't show up. Is there a good solution for this?

    Read the article
