In this case, should I use normal pagination or an infinite scroll?

Let me answer purely as a user of websites. I have no UX-designer research to support me, just raw personal emotion and experience as an (admittedly perhaps unusual) end user:

The mere thought of endless scrolling provokes visceral anger as my first reaction. When I notice that a page uses infinite scrolling, my immediate thought is: "Oh, great. I hope I never really have to go very far down."

I think infinite scroll doesn't have to be bad, but I think the following questions are worth asking:

  1. How likely are your users to want to return to a specific point in the content? If that is likely, do you have a good search function (tailored to your content, e.g. a search over time ranges for content with a clear chronological order) that can meet that need? Do you have a way to bookmark a particular point in your infinitely scrollable content?

  2. How big is each piece of content in terms of screen area? Sometimes I'm on a social media site and want to catch up on everything new in my feeds / watches / subscriptions / whatever: if each item is big, just scrolling becomes annoying, and it also becomes harder and more time-consuming to find exactly the item I want to come back to.

  3. How big is your content in terms of actual memory consumption on the user's computer? Once I wanted to go through a couple of years of someone's Instagram account: halfway through, the site started to slow down even my fairly high-end computer, because an infinitely scrolling page that doesn't unload the content above (and Instagram certainly didn't) lets the browser accumulate gigabytes of image data in memory.

In summary: does your typical use case mostly involve occasionally browsing a data set that is larger than a single page, but not dramatically larger?

Disclaimer: I know I'm odd. My personal preference will almost always be pagination; I find browsing paginated pages more pleasant than scrolling. I don't expect the world to align with me. I just want the option to use pagination on pages that matter to me (unfortunately it is often missing), or at least a way to go back or find a specific point in the infinite scroll when I have to deal with one.

P.S. Do you care about users who browse with restricted or disabled JavaScript? It is worth remembering that such people exist, and there are several good reasons (accessibility requirements and security awareness being the main ones, though there are others) to browse with scripts disabled by default. Infinite scrolling only works with JavaScript enabled, whereas pagination can be made more user-friendly with JavaScript but can degrade gracefully to plain HTTP requests if necessary.
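To make the last point concrete, here is a minimal sketch of the server side of that graceful degradation, framework-free Python with an assumed page size and in-memory record list (both made up for illustration): each page is an ordinary GET with a `?page=N` query parameter, and prev/next are plain links, so nothing requires JavaScript.

```python
from urllib.parse import urlencode

PAGE_SIZE = 10  # assumed page size, purely for illustration


def render_page(records, page):
    """Return the records for one page plus plain prev/next link targets.

    The navigation works over ordinary HTTP GET requests ('?page=N'),
    so it functions even with JavaScript disabled; JS can enhance it
    (e.g. fetch the next page inline) without being required.
    """
    start = (page - 1) * PAGE_SIZE
    items = records[start:start + PAGE_SIZE]
    links = {}
    if page > 1:
        links["prev"] = "?" + urlencode({"page": page - 1})
    if start + PAGE_SIZE < len(records):
        links["next"] = "?" + urlencode({"page": page + 1})
    return items, links
```

A server would render `items` into HTML and emit `links` as `<a href="…">` elements; the same URLs also double as bookmarkable positions in the content.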

python – How do I track multiple URL pagination from a list of start_urls?

I want to follow every URL in the start_urls list, follow each start URL's pagination, and scrape the content on those pages. I was only able to scrape and follow pagination from a single URL in the list. I tried to follow the next page's URL like this:

next_page_url = response.request.url + 'page/' + str(Couponsite2SpiderSpider.page_number)
if next_page_url is not None:
    Couponsite2SpiderSpider.page_number += 1 
    yield response.follow(next_page_url, callback=self.parse)        
else:
    Couponsite2SpiderSpider.page_number = 2

But this didn't get the results I wanted. My spider code is given below:

import scrapy
from scrapy import Request


class Couponsite2SpiderSpider(scrapy.Spider):
    name = 'couponSite2_spider'
    allowed_domains = ['www.uaepayingless.com']
    page_number = 2

    def start_requests(self):
        start_urls = reversed((
            'https://www.uaepayingless.com/coupon-category/entertainment/',
            'https://www.uaepayingless.com/coupon-category/fashion-accessories/',
            'https://www.uaepayingless.com/coupon-category/food-beverage/'
        ))
        return (Request(url=start_url) for start_url in start_urls)

    def parse(self, response):
        store = response.css('#store-listings-wrapper')
        coupon_category = store.xpath('h2/text()').extract()
        coupon_lists = store.css('#cat-coupon-lists')

        for coupon in coupon_lists.xpath('div'):
            coupon_title = coupon.xpath('div[2]/h3/a/text()').extract()
            coupon_descriptions = coupon.css('div > div.latest-coupon > div')
            for description in coupon_descriptions:
                final_description = (''.join(description.xpath('.//div[@class="coupon-des-full"]//text()').extract())
                                     .strip().replace('\n', ' ').replace('Less', '').replace('Move to Trash', '').strip())

                if len(final_description) == 0:
                    final_description = description.css('div.coupon-des-ellip::text').extract()

            coupon_exp_date = coupon.xpath('normalize-space(.//div[@class="exp-text"]/text())').extract()
            coupon_code_deal = coupon.xpath('normalize-space(.//div[@class="coupon-detail coupon-button-type"]/a/@href)').extract()
            coupon_store_out = coupon.xpath('normalize-space(.//div[@class="coupon-detail coupon-button-type"]/a/@data-aff-url)').extract()

            store_img_src = coupon.xpath('normalize-space(.//div[@class="store-thumb thumb-img"]/a/img/@src)').extract()
            coupon_store_name = coupon.xpath('normalize-space(.//div[@class="store-name"]/a/text())').extract()

            yield {
                'coupon_title': coupon_title,
                'coupon_description': final_description,
                'coupon_exp_date': coupon_exp_date,
                'coupon_code_deal': coupon_code_deal,
                'coupon_store_out': coupon_store_out,
                'store_img_src': store_img_src,
                'coupon_store_name': coupon_store_name,
                'coupon_category': coupon_category,
                'website_link': response.request.url
            }

        next_page_url = "https://www.uaepayingless.com/coupon-category/entertainment/" + 'page/' + str(Couponsite2SpiderSpider.page_number)
        if next_page_url is not None:
            Couponsite2SpiderSpider.page_number += 1
            yield response.follow(next_page_url, callback=self.parse)
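One likely cause of the problem: `page_number` is a single class attribute shared by all three start URLs, so the counters interfere with each other (and the last `next_page_url` is hard-coded to one category). A sketch of tracking the page independently per start URL instead; the `/page/N/` URL scheme is taken from the question, but the helper name is made up and this is untested against the actual site:

```python
def next_page_url(base_url, current_page):
    """Build the next page URL for one category, assuming the
    WordPress-style '/page/N/' pagination scheme from the question."""
    return f"{base_url.rstrip('/')}/page/{current_page + 1}/"


# Inside the spider, carry per-URL state in request.meta instead of a
# shared class attribute, so each start URL paginates on its own, e.g.:
#
#     base = response.meta.get("base_url", response.url)
#     page = response.meta.get("page", 1)
#     yield response.follow(
#         next_page_url(base, page),
#         callback=self.parse,
#         meta={"base_url": base, "page": page + 1},
#     )
```

With this pattern the spider never mutates shared state, so the three categories can be crawled concurrently without their page counters clobbering each other.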

Custom post types – filter page and pagination

How do I apply pagination to the filtered results, given that the filter appends the string '?filtr=1' to the domain name?

The filter works as follows: on my website (http://arche.cypis.net.pl/), after clicking the 'NAZWA' filter in the header, I get the filtered results ('project' custom posts) in alphabetical order. But I can't work out how to make this work with pagination, because after clicking the pagination links I get the wrong URLs.

My attempt follows:

if ( $_GET['filtr'] == 1 ) {

    $args = array(
        'post_type'           => array( 'project' ), // posts
        'orderby'             => 'title',
        'order'               => 'ASC',
        'posts_per_page'      => 8,
        'ignore_sticky_posts' => true, // do not keep the sticky order
    );
    $paged     = ( get_query_var( 'page' ) ) ? get_query_var( 'page' ) : 1;
    $the_query = new WP_Query( $args );

    // (the wrapper markup around the pagination was stripped from the original post)
    echo paginate_links( array(
        'base'          => get_pagenum_link( 1 ) . '%_%',
        'format'        => 'page/%#%',
        'current'       => max( 1, get_query_var( 'page' ) ),
        'total'         => $the_query->max_num_pages,
        'prev_text'     => __( '« poprzedni' ),
        'next_text'     => __( 'następny »' ),
        'page'          => $paged,
        'enable_filter' => 'title'
    ) );

    while ( $the_query->have_posts() ) : $the_query->the_post(); ?>
        <a href="">
            <img src="" alt="" />
        </a>
    <?php endwhile; wp_reset_query(); }

Sorting – paginating results from multiple sources combined by a uniform scoring function

Assume a hotel-reservation scenario with $m$ ranked lists of attribute values such as distance, price, and features (each normalized between $0$ and $1$), and a uniform linear scoring function $F(\cdot) = \alpha_1 \cdot Score_1 + \alpha_2 \cdot Score_2 + \alpha_3 \cdot Score_3$. The threshold algorithm (TA) is optimal for retrieving the top-$k$ results with the highest $F$ values.

However, consider a pagination scenario with page index $p$ and page size $k$. Instead of asking for the top-$k$, i.e. the indices $(0, k)$ in the final ranking, we ask for the indices $(pk, (p+1)k)$. What is the best way to obtain this window of results?

You can view this as the problem of paginating merged results under a unified scoring function: there are several data sources, each contributing a score value, and the merged results carry a combined score that is a (linear) function of the individual scores.

Some solutions:

Completely naive: compute the unified score for every item, sort, and slice out the window as needed.

Potentially better, but inefficient when querying lower-ranked results (later pages):
run the threshold algorithm asking for the top-$(p+1)k$, and return items $(pk, (p+1)k)$ from it.
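The naive baseline can be sketched as follows (the dict-per-source shape and the example weights are illustrative assumptions, not part of the question): compute the combined linear score for every item, sort once, and slice out the requested window.

```python
def page_window(sources, weights, p, k):
    """Rank items by the combined linear score F = sum(alpha_i * score_i)
    and return the window [p*k, (p+1)*k) of the final ranking.

    sources: list of dicts mapping item id -> normalized score in [0, 1]
    weights: the alpha_i coefficients of the linear scoring function
    """
    combined = {}
    for alpha, source in zip(weights, sources):
        for item, score in source.items():
            combined[item] = combined.get(item, 0.0) + alpha * score
    # One full sort of all items, descending by combined score.
    ranked = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[p * k:(p + 1) * k]
```

This is O(n log n) in the total number of items regardless of the page requested; the threshold-algorithm variant avoids scoring every item, but still has to materialize the top-$(p+1)k$ before discarding the first $pk$, which is why it degrades on later pages.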

Pagination Next / Previous does not always work on the iPhone

I believe this is the Ultimo theme; the author has stopped updating it. I am running it on Magento 2.3.3. The Next/Previous pagination does not always work on iPhone. Sometimes it only works after a double tap; a single tap gives no response and no error in the console. As you can see in the iPhone screenshot, after tapping page 2 the number 2 is highlighted in black.

(screenshot: iPhone pagination with page 2 highlighted)

magento2 – Ajax pagination problem with Varnish

A Magento 2.3.3 store runs with Varnish 6, Redis 5, and Amasty Layered Navigation, with Ajax calls enabled for filtering and pagination.
With this configuration, navigating through the category pages works fine at first: previous/next page clicks via Ajax behave as expected.
The next day, after about 24 hours, clicks on previous/next category pages get no response. The problem disappears after clearing the page cache.

Theme development – problem with pagination link (404 error)

I have a problem with WordPress pagination.

The maximum number of posts (a custom post type for properties) per page is 9.

When I publish 11 properties, the pagination links work properly: 9 items on the first page and 2 on the second page.

However, if I publish exactly 10 items, the link to the second page does not work. A 404 error is returned.

The loop code:

<?php
// Note: the opening of this snippet was truncated in the original post;
// the $actualPagina definition below is a reconstruction.
$actualPagina  = ( get_query_var( 'paged' ) ) ? get_query_var( 'paged' ) : 1;
$arrParamsProp = array(
    'post_type'      => 'propiedades',
    'post_status'    => 'publish',
    'posts_per_page' => 9,
    'paged'          => $actualPagina
);
$propiedades = new WP_Query( $arrParamsProp );
?>
<?php if ( $propiedades->have_posts() ) { while ( $propiedades->have_posts() ) { $propiedades->the_post(); get_template_part( 'propiedad_card' ); } } ?>

The pagination code (paginacion_prop):

<?php
$argsPaginate = array(
    'base'      => get_pagenum_link( 1 ) . $context . '%_%',
    'type'      => 'array',
    'total'     => $propiedades->max_num_pages,
    'format'    => 'page/%#%',
    'current'   => $actualPagina,
    'prev_text' => '«',
    'next_text' => '»'
);
$arrPaginacion = paginate_links( $argsPaginate );
?>

noindex – WordPress / Yoast: How do I solve the pagination page problem?

Before I switched to Yoast, I used All in One SEO, but I had some big problems with it: incompatibilities that made it impossible to keep using the plugin.

In any case, I have a problem with the pagination of my website. Here is an example:

Start: http://www.wildsolutions.at

Pagination: http://www.wildsolutions.at/page/2

This /page/2 creates SEO problems: duplicate content, duplicate h1, duplicate description, and so on.

With All in One SEO I was able to set subsequent/paginated pages to noindex. I can't find anything comparable in Yoast.

I think this is a general WordPress thing.

I did find a "solution" for this, but it brings up another problem.

If I change the permalink settings from "Post name" to "Plain", the /page/2 disappears. But then my page links are no longer nice to read (?p=123). Take a look at the examples:
(screenshot: "Post name" selected in the permalink settings)
(screenshot: broken links after switching to "Plain" permalinks)

Can you tell me how to solve this problem?

Thanks in advance and greetings,
Philip